Simulator-based training of generative models for the inverse design of metasurfaces

Jiaqi Jiang
Stanford University
jiangjq@stanford.edu

Jonathan A. Fan
Stanford University
jonfan@stanford.edu
Abstract

Metasurfaces are subwavelength-structured artificial media that can shape and localize electromagnetic waves in unique ways. The inverse design of these devices is a non-convex optimization problem in a high-dimensional space, making global optimization a major challenge. We present a new type of population-based global optimization algorithm for metasurfaces that is enabled by the training of a generative neural network. The loss function used for backpropagation depends on the generated pattern layouts, their efficiencies, and efficiency gradients, which are calculated by the adjoint variables method using forward and adjoint electromagnetic simulations. We observe that the distribution of devices generated by the network continuously shifts towards high performance design space regions over the course of optimization. Upon training completion, the best generated devices have efficiencies comparable to or exceeding the best devices designed using standard topology optimization. Our proposed global optimization algorithm applies generally to other gradient-based optimization problems in optics, mechanics, and electronics.

1 Introduction

Photonic technologies serve to manipulate, guide, and filter electromagnetic waves propagating in free space and in waveguides. Due to the strong dependence of electromagnetic function on geometry, much emphasis in the field has been placed on identifying geometric designs for these devices given a desired optical response. The vast majority of existing design concepts utilize relatively simple shapes that can be described with physical intuition. For example, silicon photonic devices typically utilize adiabatic tapers and ring resonators to route and filter guided waves [1], and metasurfaces, which are diffractive optical components used for wavefront engineering, typically utilize arrays of nanowaveguides or nanoresonators comprising simple shapes [2]. While these design concepts work well for certain applications, they possess limitations, such as narrow bandwidths and sensitivity to temperature, that prevent the further advancement of these technologies.

To overcome these limitations, design methodologies based on optimization have been proposed. Among the most successful of these concepts is gradient-based topology optimization, which uses the adjoint variables method to iteratively adjust the dielectric composition of a device and improve its functionality [3, 4, 5, 6, 7, 8]. This design method, based on gradient descent, has enabled the realization of high performance, robust [9] devices with nonintuitive layouts, including new classes of on-chip photonic devices with ultrasmall footprints [10, 11], nonlinear photonic switches [12], and diffractive optical components that can deflect [13, 14, 15, 16, 17] and focus [18, 19] electromagnetic waves with high efficiencies. While gradient-based topology optimization has great potential, it is a local optimizer and depends strongly on the initial distribution of dielectric material making up the device [20]. Identifying a high performance device is therefore computationally expensive, as it requires optimizing many random initial dielectric distributions and selecting the best resulting device.

We present a detailed mathematical discussion of a new global optimization concept based on Global Topology Optimization Networks (GLOnets) [21], which combine adjoint variables electromagnetic calculations with the training of a generative neural network to realize high performance photonic structures. Unlike gradient-based topology optimization, which optimizes one device at a time, our approach is population-based and optimizes a distribution of devices, thereby enabling a global search of the design space. As a model system, we apply our concept to the design of periodic metasurfaces, or metagratings, which selectively deflect a normally incident beam to the +1 diffraction order. In our previous work [21], we demonstrated that GLOnets conditioned on incident wavelength and deflection angle can generate ensembles of high efficiency metagratings. In this manuscript, we examine the mathematical theory underlying GLOnets, specifically a derivation of the objective and loss functions, a discussion of the training process, an interpretation of the hyperparameters, and calculations of baseline performance metrics for unconditional GLOnets. We emphasize that our proposed concepts are general and apply broadly to design problems in photonics and other fields in the physical sciences in which the adjoint variables method applies.

2 Related Machine Learning Work

In recent years, deep learning has been investigated as a tool to facilitate the inverse design of photonic devices. Many efforts have focused on using deep neural networks to learn the relationship between device geometry and optical response [22, 23], leading to trained networks that serve as surrogate models mimicking electromagnetic solvers. These networks have been used together with classical optimization methods, such as simulated annealing or particle swarm algorithms, to optimize a device [24, 25]. Device geometries have also been directly optimized from a trained network by using gradients from backpropagation [26, 27, 28, 29]. These methods work well for simple device geometries described by a few parameters. However, model accuracy decreases as the number of geometric degrees of freedom increases, making it infeasible to scale these ideas to the inverse design of complex systems.

An alternative approach is to utilize generative adversarial networks (GANs) [30], which have been proposed as a tool for freeform device optimization [31, 32, 33]. GANs have attracted great interest in recent years and have a broad range of applications, including image generation [34, 35], image synthesis [36], image translation [37], and super-resolution imaging [38]. In the context of photonic inverse design, GANs are provided images of high performance devices, and after training, they can generate high performance device patterns with geometric features mimicking the training set [32]. With this approach, devices from a trained GAN can be produced at low computational cost, but a computationally expensive training set is required. New data-driven concepts that reduce or even eliminate the need for expensive training data would dramatically expand the scope and practicality of machine learning-enabled device optimization.

3 Problem Setup

Figure 1: Schematic of a silicon metagrating that deflects normally incident TM-polarized light of wavelength $\lambda$ to an outgoing angle $\theta$. The optimization objective is to search for the metagrating pattern that maximizes deflection efficiency.

The metagratings consist of silicon nanoridges and deflect normally incident light to the +1 diffraction order (Figure 1). The thickness of the gratings is fixed to be 325 nm and the incident light is TM-polarized. The refractive index of silicon is taken from Ref. [39], and only the real part of the index is used to simplify the design problem. For each period, the metagrating is subdivided into $N$ segments, each possessing a refractive index value between those of silicon and air during the optimization process. These refractive index values are the design variables in our problem and are specified as the vector $\mathbf{x}$. Deflection efficiency is defined as the intensity of light deflected in the desired direction, specified by the angle $\theta$, normalized to the incident light intensity. The deflection efficiency is a nonlinear function of the index profile $\mathbf{x}$ and is governed by Maxwell's equations. This quantity, together with the electric field profiles within a device, can be accurately computed using electromagnetic solvers.

Our optimization objective is to maximize the deflection efficiency of the metagrating at a specific operating wavelength $\lambda$ and outgoing angle $\theta$:

$$\mathbf{x}^* = \operatorname*{arg\,max}_{\mathbf{x}} \ \text{Eff}(\mathbf{x}; \lambda, \theta) \quad (1)$$

The term $\mathbf{x}^*$ represents the globally optimized device pattern, which has an efficiency of $\text{Eff}^* = \text{Eff}(\mathbf{x}^*)$. We are interested in physical devices that possess binary index values in the vector: $x_i \in \{-1, +1\}$, where $-1$ represents air and $+1$ represents silicon.

4 Methods

Figure 2: Schematic of generative neural network-based optimization.

Our proposed inverse design scheme is shown in Figure 2 and involves the training of a generative neural network to optimize a population of devices. Uniquely, our scheme does not require any training set. The input of the generator is a random noise vector $\mathbf{z}$, which has the same dimension as the output device index profile $\mathbf{x}$. The generator is parameterized by its weights $\mathbf{w}$, which relate $\mathbf{z}$ to $\mathbf{x}$ through a nonlinear mapping: $\mathbf{x} = G_{\mathbf{w}}(\mathbf{z})$. In other words, the generator maps a uniform distribution of noise vectors to a device distribution $P(\mathbf{x})$, where $P(\mathbf{x})$ defines the probability of generating $\mathbf{x}$ in the device space $X$. We frame the objective of the optimization as maximizing the probability of generating the globally optimized device in $X$:

$$\max_{\mathbf{w}} \ \int_X \delta\big(\text{Eff}(\mathbf{x}) - \text{Eff}^*\big)\, P(\mathbf{x})\, d\mathbf{x} \quad (2)$$

4.1 Loss Function Formulation

While our objective function above is rigorous, it cannot be directly used for network training for two reasons. The first is that the derivative of the $\delta$ function is nearly always zero. To circumvent this issue, we express the $\delta$ function as the limit of a Gaussian (dropping a normalization constant that does not affect the optimization):

$$\delta\big(\text{Eff}(\mathbf{x}) - \text{Eff}^*\big) = \lim_{\sigma \to 0} \exp\left[-\frac{\big(\text{Eff}(\mathbf{x}) - \text{Eff}^*\big)^2}{\sigma^2}\right] \quad (3)$$

By substituting the $\delta$ function with this Gaussian form and leaving $\sigma$ as a tunable parameter, we relax Equation 2 and it becomes:

$$\max_{\mathbf{w}} \ \int_X \exp\left[-\frac{\big(\text{Eff}(\mathbf{x}) - \text{Eff}^*\big)^2}{\sigma^2}\right] P(\mathbf{x})\, d\mathbf{x} \quad (4)$$

As we will see later, the inclusion of $\sigma$ as a tunable hyperparameter turns out to be important for stabilizing the network training process when training with a finite batch size.

The second reason is that the objective function depends on $\text{Eff}^*$, which is unknown. To address this problem, we approximate the Gaussian in Equation 4 with an exponential function:

$$\exp\left[-\frac{\big(\text{Eff}(\mathbf{x}) - \text{Eff}^*\big)^2}{\sigma^2}\right] \approx \exp\left[\frac{\text{Eff}(\mathbf{x}) - \text{Eff}^*}{\sigma}\right] \quad (5)$$

This approximation is valid because $\text{Eff}(\mathbf{x}) \leq \text{Eff}^*$, so the exponential only needs to approximate the Gaussian for efficiency values less than $\text{Eff}^*$, where both functions increase monotonically towards their maximum at $\text{Eff}(\mathbf{x}) = \text{Eff}^*$. With this approximation, we can factor $\text{Eff}^*$ out of the integral:

$$\max_{\mathbf{w}} \ \int_X \exp\left[\frac{\text{Eff}(\mathbf{x})}{\sigma}\right] P(\mathbf{x})\, d\mathbf{x} \quad (6)$$

The factored-out term $\exp(-\text{Eff}^*/\sigma)$ is now a multiplicative constant that does not affect the optimization and does not require explicit evaluation. Alternatives to the exponential function can be considered and tailored to the specific optimization problem. For this study, we will use Equation 6.

In practice, it is not possible to evaluate Equation 6 over the entire design space $X$. We instead sample a batch of $M$ devices $\{\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(M)}\}$ from $P(\mathbf{x})$, which leads to a further approximation of the objective function:

$$\int_X \exp\left[\frac{\text{Eff}(\mathbf{x})}{\sigma}\right] P(\mathbf{x})\, d\mathbf{x} = \mathbb{E}_{\mathbf{x} \sim P}\left[\exp\left(\frac{\text{Eff}(\mathbf{x})}{\sigma}\right)\right] \quad (7)$$
$$\approx \frac{1}{M} \sum_{m=1}^{M} \exp\left(\frac{\text{Eff}^{(m)}}{\sigma}\right) \equiv \hat{O} \quad (8)$$
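For concreteness, the batch estimator in Equation 8 reduces to a single vectorized reduction. The sketch below is our illustration, not the authors' released implementation; the `effs` tensor would be supplied by the electromagnetic solver.

```python
import torch

def objective_estimate(effs: torch.Tensor, sigma: float) -> torch.Tensor:
    """Monte Carlo estimate of Eq. (8): the mean of exp(Eff/sigma) over a batch.
    effs has shape (M,) and holds simulated deflection efficiencies in [0, 1]."""
    return torch.exp(effs / sigma).mean()

# Example: the exponential weighting emphasizes the best devices in the batch.
print(objective_estimate(torch.tensor([0.2, 0.5, 0.9]), sigma=0.5))
```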

We note that the deflection efficiency $\text{Eff}^{(m)}$ of device $\mathbf{x}^{(m)}$ is calculated using an electromagnetic solver, such that Equation 8 is not directly differentiable for backpropagation. To bypass this problem, we use the adjoint variables method to compute the efficiency gradient with respect to the refractive indices for each device, $\mathbf{g}^{(m)}$ (Figure 2). Details pertaining to these gradient calculations can be found in other inverse design papers [11, 12, 13]. To summarize, electric field profiles within the device layer are calculated under two different electromagnetic excitation conditions. The first is the forward simulation, in which the fields $\mathbf{E}^{\text{fwd}}$ are calculated by propagating a normally incident electromagnetic wave from the substrate to the device, as shown in Figure 1. The second is the adjoint simulation, in which the fields $\mathbf{E}^{\text{adj}}$ are calculated by propagating an electromagnetic wave in the direction opposite to the desired outgoing direction. The efficiency gradient is calculated by integrating the overlap of these field terms over the device layer:

$$\mathbf{g}^{(m)} = \frac{\partial\, \text{Eff}}{\partial \mathbf{x}}\bigg|_{\mathbf{x}^{(m)}}, \qquad g_i^{(m)} \propto \mathrm{Re}\!\int_{\text{segment } i} \mathbf{E}^{\text{fwd}} \cdot \mathbf{E}^{\text{adj}}\, dV \quad (9)$$
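A discretized version of this overlap calculation is sketched below, under the assumption that the solver returns both field solutions sampled on a common grid; the physical prefactor, which depends on the solver and formulation, is omitted.

```python
import numpy as np

def adjoint_gradient(E_fwd: np.ndarray, E_adj: np.ndarray) -> np.ndarray:
    """Per-segment efficiency gradient from the field overlap (Eq. 9).

    E_fwd, E_adj: complex fields inside the device layer, shape (N, K)
    for N segments with K grid points each. The physical prefactor and
    polarization bookkeeping are solver-dependent and omitted here.
    """
    return np.real(E_fwd * E_adj).sum(axis=1)
```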

Finally, we use our adjoint gradients and objective function to define the loss function $L$. Our goal is to define $L$ such that minimizing $L$ is equivalent to maximizing the objective function during generator training. With this definition, $L$ must satisfy $\partial L / \partial \mathbf{w} \propto -\partial \hat{O} / \partial \mathbf{w}$ and is defined as:

$$L = -\frac{1}{M} \sum_{m=1}^{M} \exp\left(\frac{\text{Eff}^{(m)}}{\sigma}\right) \left(\mathbf{x}^{(m)} \cdot \mathbf{g}^{(m)}\right) \quad (10)$$

$\text{Eff}^{(m)}$ and $\mathbf{g}^{(m)}$ are treated as independent variables calculated from electromagnetic simulations and have no dependence on $\mathbf{w}$. Finally, we add a regularization term to $L$ to ensure that the generated patterns are binary. This term reaches its minimum when the generated patterns are fully binarized, i.e., when $|x_i^{(m)}| = 1$ for all segments. A coefficient $\gamma$ is introduced to balance binarization with efficiency enhancement, and we have as our final loss function:

$$L = -\frac{1}{M} \sum_{m=1}^{M} \left[\exp\left(\frac{\text{Eff}^{(m)}}{\sigma}\right) \left(\mathbf{x}^{(m)} \cdot \mathbf{g}^{(m)}\right) + \frac{\gamma}{N} \sum_{i=1}^{N} \left|x_i^{(m)}\right|\right] \quad (11)$$
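A minimal PyTorch rendering of Equation 11 might look as follows; this is our reconstruction, with an illustrative $\gamma$ value, not the authors' released code. Because `effs` and `grads` are detached, backpropagation flows only through the generated patterns `x`, matching the treatment of $\text{Eff}^{(m)}$ and $\mathbf{g}^{(m)}$ as constants.

```python
import torch

def glonet_loss(x, effs, grads, sigma=0.5, gamma=0.05):
    """Loss of Eq. (11). x: (M, N) generated patterns (tracked by autograd);
    effs: (M,) simulated efficiencies; grads: (M, N) adjoint gradients.
    The gamma default is illustrative."""
    effs, grads = effs.detach(), grads.detach()   # constants w.r.t. the weights
    weights = torch.exp(effs / sigma)             # Boltzmann-like device weighting
    fom = weights * (x * grads).sum(dim=1)        # exp(Eff/sigma) * (x . g)
    binarization = x.abs().mean(dim=1)            # largest when all |x_i| = 1
    return -(fom + gamma * binarization).mean()
```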

4.2 Network Architecture

The architecture of the generative neural network is adapted from DCGAN [40] and comprises 2 fully connected layers, 4 transposed convolution layers, and a Gaussian filter at the end to eliminate small features. LeakyReLU is used for all activation functions except for the last layer, which uses a tanh activation. We also add dropout and batchnorm layers to enhance the diversity of the generated patterns. Periodic padding is used to account for the fact that the devices are periodic structures.
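A schematic PyTorch module in the spirit of this description is sketched below; the layer widths, segment count, and Gaussian-filter width are our assumptions rather than values reported in the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator sketch: noise z -> periodic 1D index profile x.
    Layer widths, segment count, and filter width are illustrative assumptions."""

    def __init__(self, noise_dim=256, base_channels=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.BatchNorm1d(256), nn.LeakyReLU(0.2),
            nn.Linear(256, base_channels * 32), nn.BatchNorm1d(base_channels * 32),
            nn.LeakyReLU(0.2),
        )

        def up(c_in, c_out):   # one transposed-convolution upsampling stage
            return nn.Sequential(
                nn.ConvTranspose1d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm1d(c_out), nn.LeakyReLU(0.2), nn.Dropout(0.2),
            )

        # 32 -> 64 -> 128 -> 256 segments, with a stride-1 transposed
        # convolution as the fourth stage.
        self.deconv = nn.Sequential(
            up(base_channels, 16), up(16, 8), up(8, 4),
            nn.ConvTranspose1d(4, 1, kernel_size=5, padding=2),
        )
        gauss = torch.exp(-torch.linspace(-2, 2, 5) ** 2)  # blur removes small features
        self.register_buffer("gauss", (gauss / gauss.sum()).view(1, 1, -1))

    def forward(self, z):
        h = self.fc(z).view(z.size(0), -1, 32)
        h = self.deconv(h)
        # Circular padding keeps the blurred profile periodic across the unit cell.
        h = torch.cat([h[..., -2:], h, h[..., :2]], dim=-1)
        h = nn.functional.conv1d(h, self.gauss)
        return torch.tanh(h.squeeze(1))    # index profile in (-1, 1)
```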

4.3 Training Procedure

Parameters: $M$, batch size; $\sigma$, loss function coefficient; $\alpha$, learning rate; $\beta_1$ and $\beta_2$, momentum coefficients used in Adam; $\gamma$, binarization coefficient.
Initialize generator weights $\mathbf{w}$;
while $i <$ total iterations do
       Sample noise vectors $\{\mathbf{z}^{(m)}\}_{m=1}^{M}$ from a uniform distribution;
       $\mathbf{x}^{(m)} = G_{\mathbf{w}}(\mathbf{z}^{(m)})$, device samples;
       Compute $\text{Eff}^{(m)}$ and $\mathbf{g}^{(m)}$ with forward and adjoint simulations;
       Evaluate the loss $L$ (Equation 11);
       Update $\mathbf{w}$ with one Adam step;
end while
Algorithm 1 Generative neural network-based optimization

The training procedure is shown in Algorithm 1. The Adaptive Moment Estimation (Adam) algorithm, a variant of gradient descent, is used to optimize the network weights $\mathbf{w}$. $\beta_1$ and $\beta_2$ are two hyperparameters used in Adam [41]. Initially, with the use of an identity shortcut [42], the device distribution is approximately uniform over the whole device space $X$. During the training process, $P(\mathbf{x})$ is continuously refined and maps more prominently onto high-efficiency device subspaces. When the generator is done training, devices produced by the generator have a high probability of being highly efficient. The final device design is determined by generating a batch of devices from the fully trained generator, simulating each of these devices, and selecting the best one.
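One training iteration might be organized as in the sketch below, which reuses `glonet_loss` from the earlier sketch; `simulate` is a placeholder for the forward and adjoint RCWA calls, and the numeric defaults are illustrative rather than the paper's exact settings.

```python
import torch

def train_glonet(generator, simulate, iterations=1000, batch_size=100,
                 noise_dim=256, sigma=0.5, gamma=0.05, lr=1e-3, betas=(0.9, 0.99)):
    """Sketch of Algorithm 1. `simulate(x)` must return (effs, grads) from
    forward and adjoint electromagnetic simulations."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=betas)
    for _ in range(iterations):
        z = 2 * torch.rand(batch_size, noise_dim) - 1   # uniform noise in (-1, 1)
        x = generator(z)                                # candidate devices
        effs, grads = simulate(x.detach())              # solver calls: no autograd
        loss = glonet_loss(x, effs, grads, sigma, gamma)  # Eq. (11) sketch above
        opt.zero_grad()
        loss.backward()                                 # gradients flow through x
        opt.step()
    return generator
```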

4.4 Comparison with gradient-based topology optimization

In gradient-based topology optimization, a large number of local optimizations are used to search for the global optimum. For each run, device patterns are randomly initialized, and a local search in the design space is performed using gradient descent. The highest efficiency device among those optimized devices is taken as the final design. With this approach, many devices get trapped in local optima or saddle points in $X$, and the computational resources used to optimize those devices do not contribute to finding or refining the globally optimal device. Additionally, finding the global optimum in a very high dimensional space can require an exceedingly large number of individual optimization runs. GLOnets are qualitatively different, as they optimize a distribution of devices to perform global optimization. As indicated in Equation 11, each device sample is weighted by the term $\exp(\text{Eff}^{(m)}/\sigma)$, which biases generator training towards higher efficiency devices and pushes $P(\mathbf{x})$ towards more favorable design subspaces. In this manner, computational resources are not wasted optimizing devices within subspaces possessing low-efficiency local optima.

5 Numerical Experiments

5.1 A Toy Model

We first perform GLOnet-based optimization on a simple test case, where the input and output are two-dimensional. The "efficiency" function is defined as:

(12)

This function is non-convex, with many local optima and one global optimum at (0, 0). We use Algorithm 1 to search for the global optimum. The hyperparameters $\sigma$ and the batch size are fixed throughout network training. The generator is trained for 150 iterations, and the generated samples over the course of training are shown as red dots in Figure 3. Initially, the generated "device" distribution is spread out over the space, and it then gradually converges to a cluster located at the global optimum. In the training run shown, no device is trapped in a local optimum. Upon training 100 distinct GLOnets, 96 of them successfully produced the globally optimized device.
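The same loop can be reproduced without an electromagnetic solver by swapping in any differentiable non-convex function. The function below is an illustrative stand-in, not the paper's Equation 12; it has a global maximum at the origin and many local maxima created by the cosine ripples, and the efficiency gradient is obtained analytically via autograd rather than from adjoint simulations.

```python
import torch

def toy_efficiency(x):
    """Illustrative stand-in for Eq. (12): global maximum of 1 at (0, 0),
    with many local maxima created by the cosine ripples."""
    envelope = torch.exp(-(x ** 2).sum(dim=1) / 8)
    ripples = torch.cos(3 * x).prod(dim=1).abs()
    return envelope * ripples

generator = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.LeakyReLU(0.2),
                                torch.nn.Linear(32, 2))
opt = torch.optim.Adam(generator.parameters(), lr=1e-2)
for _ in range(150):
    x = generator(2 * torch.rand(100, 2) - 1)       # batch of 100 samples
    eff = toy_efficiency(x)
    # Analytic stand-in for the adjoint gradient d(eff)/dx.
    g = torch.autograd.grad(eff.sum(), x, retain_graph=True)[0]
    # Loss of Eq. (10); no binarization term for this continuous problem.
    loss = -(torch.exp(eff / 0.5).detach() * (x * g.detach()).sum(dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```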

Figure 3: Results from a toy model test. Samples generated by the generator, shown as red dots, evolve in the two-dimensional design space over the course of training.

5.2 Inverse design of metagratings

We next apply our algorithm to the inverse design of 63 different types of metagratings, each with differing operating wavelengths and deflection angles. The wavelengths range from 800 nm to 1200 nm, in increments of 50 nm, and the deflection angles range from 40 degrees to 70 degrees, in increments of 5 degrees. Unlike the conditional GLOnet in Ref. [21], where many different types of metagratings are simultaneously designed using a single network, we use a distinct unconditional GLOnet to design each device type, operating at specific wavelength and deflection angle parameters.

Figure 4: (a) Plot of efficiency for devices operating at different wavelength and angle values, designed using gradient-based topology optimization. For each wavelength and angle combination, 500 individual topology optimizations are performed and the highest efficiency device is used for the plot. (b) Plot of efficiency for devices designed using GLOnet-based optimization. For each wavelength and angle combination, 500 devices are generated and the highest efficiency device is used for the plot. (c) Training process of GLOnets. The left panel shows the 90th percentile efficiency and average efficiency of the device batch over the course of training. The right panel shows the binarization degree of the generated devices, defined as $\frac{1}{N}\sum_{i=1}^{N}|x_i|$. (d) Representative device patterns produced by a trained generator.
Figure 5: Efficiency histograms of 500 devices designed using gradient-based topology optimization (red) and GLOnet-based optimization (blue). The statistics of device efficiencies in each histogram are also displayed. For most wavelength and angle values, the efficiency distributions from GLOnets are narrower and have higher maximum values compared to those from gradient-based topology optimization.

5.2.1 Implementation details

The hyperparameters we use are $\sigma = 0.5$ and a fixed binarization coefficient $\gamma$. The batch size is 100. To prevent vanishing gradients when the generated patterns become binarized ($x_i \to \pm 1$), we specify the last activation function to be a scaled tanh whose output range slightly exceeds $[-1, 1]$, so that gradients remain nonzero as the outputs reach $\pm 1$.

For each combination of wavelength and angle, we train the generator for 1000 iterations. Upon completion of network training, 500 different values of $\mathbf{z}$ are used to generate 500 different devices. All devices are simulated, and the highest efficiency device is taken as the final design.

The network is implemented using the pytorch-1.0.0 package. The forward and adjoint simulations are performed using the Reticolo RCWA electromagnetic solver [43] in MATLAB. The network is trained on an Nvidia Titan V GPU with 4 CPUs, and one device optimization takes 10 minutes. Our code implementation can be found at: https://github.com/jiaqi-jiang/GLOnet.git.

5.2.2 Baseline

We benchmark our method against gradient-based topology optimization. For each design target $(\lambda, \theta)$, we start with 500 random gray-scale vectors and iteratively optimize each device using efficiency gradients calculated from forward and adjoint simulations. A threshold filter binarizes the device patterns. Each initial dielectric distribution is optimized for 200 iterations, and the highest efficiency device among the 500 candidates is taken as the final design. The computational budget is set to match that used for GLOnet training to facilitate a fair comparison.
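A single local optimization run of this baseline might be implemented as sketched below; `simulate` again stands in for the forward/adjoint solver, and the binarization schedule is a simplified stand-in for the threshold filter.

```python
import numpy as np

def local_topology_optimization(simulate, n_segments=256, iters=200, step=0.05,
                                seed=0):
    """One local gradient-ascent run of the baseline. `simulate(x)` returns
    (eff, grad) from forward and adjoint simulations."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n_segments)          # random gray-scale start
    for i in range(iters):
        eff, grad = simulate(x)
        x = np.clip(x + step * grad, -1, 1)     # gradient ascent on Eff
        if i > iters // 2:                      # push towards +/-1 late in the run
            x = np.sign(x) * np.minimum(1.0, np.abs(x) + 0.01)
    return np.sign(x)                           # final binary device
```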

5.2.3 Results

The efficiencies of the best devices designed using gradient-based topology optimization and GLOnets are shown in Figure 4. 90% of the best devices from GLOnets have efficiencies higher than or equal to those of the best devices produced by gradient-based topology optimization, and 98% have efficiencies within 5% of the corresponding topology-optimized devices. For wavelengths and angles at which GLOnets perform worse than gradient-based topology optimization, we can perform multiple network trainings or further tune the batch size and $\sigma$ to obtain better results. The efficiency histograms from GLOnets and gradient-based topology optimization, for select wavelength and angle pairs, are displayed in Figure 5. In most cases, the histograms produced by our method have higher average and maximum efficiencies, indicating that low-efficiency local optima are often avoided during generator training. Device patterns generated by a well-trained generator are shown in Figure 4d; they all look relatively similar, indicating that the generator has collapsed around a particular high-efficiency device topology irrespective of the input values of $\mathbf{z}$.

5.2.4 GLOnet Stability

To validate the stability of GLOnet-based optimization, we train eight unconditional GLOnets independently for the same wavelength and deflection angle. For each trained GLOnet, we generate 500 devices and visualize the top 20 devices in a 2D plane using principal component analysis (PCA) (Figure 6). The principal basis is the same for all eight plots and is calculated using the top 20 devices of each GLOnet, for a total of 160 devices. Six of the eight GLOnets converge to the same optimum, a device with 97% efficiency, while one GLOnet converges to a nearby optimum, a device with 96% efficiency. While we cannot prove that the device with 97% efficiency is globally optimal, the consistent convergence of GLOnets to this single optimum suggests that the network is finding the global optimum. At the very least, this demonstration shows that GLOnets can consistently generate exceptionally high performance devices.
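The projection itself is standard PCA; a sketch using scikit-learn (our choice of tooling, not stated in the paper) is shown below, where the stacked device array and file name are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stack the top-20 devices from each of the 8 GLOnets: shape (160, N).
devices = np.load("top_devices.npy")        # placeholder file name
pca = PCA(n_components=2).fit(devices)      # one shared principal basis
coords = pca.transform(devices)             # 2D coordinates for plotting
```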

Figure 6: PCA visualization of GLOnet-optimized devices. Eight unconditional GLOnets are trained independently and the top 20 devices of each GLOnet are visualized. The pattern and efficiency of the best device in each plot are shown as insets.

5.2.5 Discussion of hyperparameter $\sigma$ and batch size

In principle, $\sigma$ approaching zero could be used if the entire design space could be sampled to train the neural network. In this case, the globally optimized structure would be sampled and would be the only device that contributes to neural network training, pushing the response of the network towards our desired objective. However, the design space is immense and infeasible to probe in its entirety. Furthermore, this scenario would amount to a direct readout of the globally optimized device, negating the need to perform an optimization.

In practice, we can only realistically process small batches of devices that comprise a very small fraction of the total design space during network training. For many iterations, the best device within each batch will lie only in locally optimal regions of the design space. To prevent the network from getting trapped within these local optima, we specify $\sigma$ to be finite, which adds noise to the training process. In our loss function, this noise manifests in the weighting term $\exp(\text{Eff}/\sigma)$. This exponential expression has a Boltzmann form, and $\sigma$ can therefore be treated as an effective temperature. In a manner analogous to simulated annealing, $\sigma$ can be modulated throughout the training process.
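The temperature analogy can be made concrete: normalizing the weights $\exp(\text{Eff}^{(m)}/\sigma)$ over a batch shows how a small $\sigma$ concentrates training on the batch's best device, while a large $\sigma$ lets low-efficiency devices contribute. The toy values below are illustrative.

```python
import torch

effs = torch.tensor([0.3, 0.6, 0.9])        # efficiencies of a toy batch
for sigma in (0.1, 0.5, 2.0):
    w = torch.exp(effs / sigma)
    print(sigma, (w / w.sum()).tolist())    # small sigma -> winner-take-all
```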

The impact of batch size and $\sigma$ on GLOnet performance, for a fixed wavelength and deflection angle, is summarized in Figure 7. In Figure 7a, $\sigma$ is fixed at 0.5 and the batch size is varied from 10 to 1000 devices per iteration. When the batch size is too small, the design space is undersampled, making it difficult to find the global optimum. As the batch size increases, GLOnet performance saturates, such that further increases oversample the design space and waste computational resources. For this particular GLOnet, a batch size of 100 balances optimization capability with resource management.

Figure 7b summarizes the impact of $\sigma$ on GLOnet training, given a fixed batch size of 100 devices. The plot indicates that the range of $\sigma$ producing the highest efficiency devices is between 0.5 and 1.0. When $\sigma$ is less than 0.5, there is insufficient noise in the training process and the network is more easily trapped in local optima, particularly early in training. When $\sigma$ is larger than 1, GLOnet performance deteriorates because low-efficiency devices contribute more significantly to training, introducing excess noise.

The optimal batch size and $\sigma$ values are highly problem-dependent and require tuning for each optimization objective. For example, proper GLOnet optimization within a design space with relatively few local optima can be achieved with relatively small batch sizes and small values of $\sigma$. The proper selection of these hyperparameters is not intuitive and requires experience and parametric sweeps.

Figure 7: Performance of the unconditional GLOnet for different values of (a) batch size and (b) $\sigma$.

6 Comparison with evolution strategies (ES)

Evolution strategies are classical global optimization algorithms. One example is the genetic algorithm, which has been applied to many types of photonic design problems, including metasurface design [44]. Compared to our approach, genetic algorithms are inefficient and require many thousands of iterations to identify even a simple optimal device structure. The difficulty stems from the complicated relationship between optical response and device geometry, which is governed by Maxwell's equations. Methods like ours, which incorporate gradients, can more efficiently locate favorable regions of the design space because gradients provide clear, non-heuristic instruction on how to improve device performance.

Another ES algorithm is the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a probability distribution-based ES algorithm. CMA-ES assumes an explicit form for the probability distribution of the design variables (e.g., a multivariate normal distribution), which is typically parameterized by a small number of terms. Our algorithm differs from CMA-ES in two main ways. First, instead of defining an explicit probability distribution, we define an explicit generative model parameterized by the network weights. The probability distribution in our algorithm is therefore implicit and has no assumed form. This is important because there is no reason the probability distribution of the design variables should take a simple, explicitly defined form such as a multivariate normal distribution. Second, CMA-ES is derivative-free, whereas our algorithm uses gradients and is therefore more efficient at steering device populations into desirable parts of the design space.

7 Conclusions and Future Directions

In this paper, we present a generative neural network-based global optimization algorithm for metasurface design. Instead of optimizing many devices individually, as is the case for gradient-based topology optimization, we reframe the global optimization problem as the training of a generator. The efficiency gradients of all devices generated in each training iteration are used to collectively improve the performance of the generator and map the input noise to favorable regions of the design space.

An open topic of future study is understanding how to properly select and tune the network hyperparameters dynamically during network training. We anticipate that, as the distribution of generated devices converges to a narrow range of geometries over the course of training, the batch size can be dynamically decreased, leading to computational savings. We also hypothesize that dynamically decreasing $\sigma$ can further stabilize the GLOnet training process. These variations in batch size and $\sigma$ can be predetermined prior to network training or modified dynamically using feedback during the training process.

We are also interested in applying our algorithm to more complex systems, such as 2D and 3D metasurfaces, multi-function metasurfaces, and other photonic design problems. A deeper understanding of loss function engineering will be necessary for multi-function metasurface design, which requires optimizing multiple objectives simultaneously. We envision that our algorithm has strong potential to solve inverse design problems in other domains of the physical sciences, such as mechanics and electronics.

Acknowledgement

The simulations were performed on the Sherlock computing cluster at Stanford University. This work was supported by the U.S. Air Force under Award Number FA9550-18-1-0070, the Office of Naval Research under Award Number N00014-16-1-2630, and the David and Lucile Packard Foundation.

References

  • (1) B. Jalali and S. Fathpour. Silicon photonics. Journal of Lightwave Technology, 24(12):4600–4615, 2006.
  • (2) Patrice Genevet, Federico Capasso, Francesco Aieta, Mohammadreza Khorasaninejad, and Robert Devlin. Recent advances in planar optics: from plasmonic to dielectric metasurfaces. Optica, 4(1):139–152, 2017.
  • (3) Sean Molesky, Zin Lin, Alexander Y. Piggott, Weiliang Jin, Jelena Vuckovic, and Alejandro W. Rodriguez. Inverse design in nanophotonics. Nature Photonics, 12(11):659–670, 2018.
  • (4) Jakob S. Jensen and Ole Sigmund. Topology optimization for nano-photonics. Laser & Photonics Reviews, 5(2):308–321, 2011.
  • (5) Sawyer D. Campbell, David Sell, Ronald P. Jenkins, Eric B. Whiting, Jonathan A. Fan, and Douglas H. Werner. Review of numerical optimization techniques for meta-device design [Invited]. Optical Materials Express, 9(4):1842–1863, 2019.
  • (6) Ole Sigmund and Kurt Maute. Topology optimization approaches. Structural and Multidisciplinary Optimization, 48(6):1031–1055, 2013.
  • (7) Ole Sigmund. On the design of compliant mechanisms using topology optimization. Journal of Structural Mechanics, 25(4):493–524, 1997.
  • (8) Christopher M Lalau-Keraly, Samarth Bhargava, Owen D Miller, and Eli Yablonovitch. Adjoint shape optimization applied to electromagnetic design. Optics express, 21(18):21693–21701, 2013.
  • (9) Evan W. Wang, David Sell, Thaibao Phan, and Jonathan A. Fan. Robust design of topology-optimized metasurfaces. Optical Materials Express, 9(2):469–482, 2019.
  • (10) Jakob S. Jensen and Ole Sigmund. Systematic design of photonic crystal structures using topology optimization: Low-loss waveguide bends. Applied Physics Letters, 84(12):2022–2024, 2004.
  • (11) Alexander Y. Piggott, Jesse Lu, Konstantinos G. Lagoudakis, Jan Petykiewicz, Thomas M. Babinec, and Jelena Vučković. Inverse design and demonstration of a compact and broadband on-chip wavelength demultiplexer. Nature Photonics, 9(6):374–377, 2015.
  • (12) Tyler W. Hughes, Momchil Minkov, Ian A. D. Williamson, and Shanhui Fan. Adjoint Method and Inverse Design for Nonlinear Nanophotonic Devices. ACS Photonics, 5(12):4781–4787, 2018.
  • (13) David Sell, Jianji Yang, Sage Doshay, Rui Yang, and Jonathan A Fan. Large-angle, multifunctional metagratings based on freeform multimode geometries. Nano letters, 17(6):3752–3757, 2017.
  • (14) Jianji Yang and Jonathan A. Fan. Analysis of material selection on dielectric metasurface performance. Optics Express, 25(20):23899–23909, 2017.
  • (15) David Sell, Jianji Yang, Sage Doshay, and Jonathan A. Fan. Periodic Dielectric Metasurfaces with High-Efficiency, Multiwavelength Functionalities. Advanced Optical Materials, 5(23), 2017.
  • (16) Jianji Yang, David Sell, and Jonathan A. Fan. Freeform Metagratings Based on Complex Light Scattering Dynamics for Extreme, High Efficiency Beam Steering. Annalen der Physik, 530(1), 2018.
  • (17) David Sell, Jianji Yang, Evan W. Wang, Thaibao Phan, Sage Doshay, and Jonathan A. Fan. Ultra-High-Efficiency Anomalous Refraction with Dielectric Metasurfaces. ACS Photonics, 5(6):2402–2407, 2018.
  • (18) Zin Lin, Benedikt Groever, Federico Capasso, Alejandro W. Rodriguez, and Marko Lončar. Topology-optimized multilayered metaoptics. Physical Review Applied, 9:044030, 2018.
  • (19) Thaibao Phan, David Sell, Evan W. Wang, Sage Doshay, Kofi Edee, Jianji Yang, and Jonathan A. Fan. High-efficiency, large-area, topology-optimized metasurfaces. Light: Science & Applications, 8(1):48, 2019.
  • (20) Jianji Yang and Jonathan A. Fan. Topology-optimized metasurfaces: impact of initial geometric layout. Optics Letters, 42(16):3161–3164, 2017.
  • (21) Jiaqi Jiang and Jonathan A. Fan. Global optimization of dielectric metasurfaces using a physics-driven neural network. Nano Letters, 19(8):5366–5372, 2019. PMID: 31294997.
  • (22) Qi-Jun Zhang, K. C. Gupta, and V. K. Devabhaktuni. Artificial neural networks for rf and microwave design - from theory to practice. IEEE Transactions on Microwave Theory and Techniques, 51(4):1339–1350, April 2003.
  • (23) G. N. Malheiros-Silveira and H. E. Hernandez-Figueroa. Prediction of dispersion relation and pbgs in 2-d pcs by using artificial neural networks. IEEE Photonics Technology Letters, 24(20):1799–1801, Oct 2012.
  • (24) D. R. Luna, C. F. L. Vasconcelos, and R. M. S. Cruz. Using natural optimization algorithms and artificial neural networks in the design of effective permittivity of metamaterials. In 2013 SBMO/IEEE MTT-S International Microwave Optoelectronics Conference (IMOC), pages 1–4, Aug 2013.
  • (25) P. H. da F. Silva, R. M. S. Cruz, and A. G. d’Assuncao. Blending pso and ann for optimal design of fss filters with koch island patch elements. IEEE Transactions on Magnetics, 46(8):3010–3013, Aug 2010.
  • (26) John Peurifoy, Yichen Shen, Li Jing, Yi Yang, Fidel Cano-Renteria, Brendan G DeLacy, John D Joannopoulos, Max Tegmark, and Marin Soljačić. Nanophotonic particle simulation and inverse design using artificial neural networks. Science advances, 4(6):eaar4206, 2018.
  • (27) Sandeep Inampudi and Hossein Mosallaei. Neural network based design of metagratings. Applied Physics Letters, 112(24):241102, 2018.
  • (28) Wei Ma, Feng Cheng, and Yongmin Liu. Deep-learning-enabled on-demand design of chiral metamaterials. ACS nano, 12(6):6326–6334, 2018.
  • (29) Dianjing Liu, Yixuan Tan, Erfan Khoram, and Zongfu Yu. Training deep neural networks for the inverse design of nanophotonic structures. ACS Photonics, 5(4):1365–1369, 2018.
  • (30) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • (31) Zhaocheng Liu, Dayu Zhu, Sean P Rodrigues, Kyu-Tae Lee, and Wenshan Cai. Generative model for the inverse design of metasurfaces. Nano letters, 18(10):6570–6576, 2018.
  • (32) Jiaqi Jiang, David Sell, Stephan Hoyer, Jason Hickey, Jianji Yang, and Jonathan A Fan. Free-form diffractive metagrating design based on generative adversarial networks. ACS nano, 13(8):8872–8878, 2019.
  • (33) Sunae So and Junsuk Rho. Designing nanophotonic structures using conditional deep convolutional generative adversarial networks. Nanophotonics, 2019.
  • (34) Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
  • (35) Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
  • (36) Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, and Bill Freeman. Visual object networks: Image generation with disentangled 3d representations. In Advances in Neural Information Processing Systems, pages 118–129, 2018.
  • (37) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223–2232, 2017.
  • (38) Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4681–4690, 2017.
  • (39) Martin A. Green. Self-consistent optical parameters of intrinsic silicon at 300k including temperature coefficients. Solar Energy Materials and Solar Cells, 92(11):1305 – 1310, 2008.
  • (40) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • (41) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • (42) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • (43) JP Hugonin and Ph Lalanne. Reticolo Software for Grating Analysis. Institut d’Optique: Palaiseau, France, 2005.
  • (44) Victor Egorov, Michal Eitan, and Jacob Scheuer. Genetically optimized all-dielectric metasurfaces. Optics express, 25(3):2583–2593, 2017.