Learning to Design Circuits
Abstract
Analog IC design relies on human experts to search for parameters that satisfy circuit specifications with their experience and intuition, which is highly labor-intensive, time-consuming, and suboptimal. Machine learning is a promising tool to automate this process. However, supervised learning is difficult for this task due to the low availability of training data: 1) circuit simulation is slow, so generating large-scale datasets is time-consuming; 2) most circuit designs are proprietary IPs within individual IC companies, making it expensive to collect large-scale datasets. We propose Learning to Design Circuits (L2DC), which leverages reinforcement learning to efficiently generate new circuit data and to optimize circuits. We fix the schematic and optimize the transistor parameters automatically by training an RL agent with no prior knowledge about optimizing circuits. By iteratively getting observations, generating a new set of transistor parameters, getting a reward, and adjusting the model, L2DC is able to optimize circuits. We evaluate L2DC on two transimpedance amplifiers. Trained for a day, our RL agent achieves performance comparable to or better than that of human experts trained for a quarter. It first learns to meet hard constraints (e.g., gain, bandwidth), and then learns to optimize good-to-have targets (e.g., area, power). Compared with grid-search-aided human design, L2DC achieves higher sample efficiency with comparable performance. Under the same runtime constraint, L2DC also outperforms Bayesian Optimization.
Learning to Design Circuits
Hanrui Wang, EECS, Massachusetts Institute of Technology, Cambridge, MA 02139, hanrui@mit.edu
Jiacheng Yang (equal contribution), EECS, Massachusetts Institute of Technology, Cambridge, MA 02139, jcyoung@mit.edu
Hae-Seung Lee, EECS, Massachusetts Institute of Technology, Cambridge, MA 02139, hslee@mtl.mit.edu
Song Han, EECS, Massachusetts Institute of Technology, Cambridge, MA 02139, songhan@mit.edu
32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.
1 Introduction
Analog circuits process continuous signals, which exist in almost all electronic systems, and they provide the important function of interfacing real-world signals with digital systems. Analog IC design has a large number of circuit parameters to tune, which is highly difficult for several reasons. First, the relationship between the parameters and the performance is subtle and uncertain, and designers have few explicit patterns or deterministic rules to follow. The Analog Circuits Octagon [1] characterizes the strongly coupled relations among performance metrics: improving one aspect often incurs the deterioration of another, so a proper and intelligent trade-off between those metrics requires rich design experience and intuition. Moreover, simulation of circuits is slow, especially for complex circuits such as ADCs, DACs, and PLLs, which makes random or exhaustive search impractical.
Several methods exist to automate the circuit parameter search process. Particle swarm optimization [2] is a popular approach, but it easily falls into local optima in high-dimensional spaces and also suffers from a low convergence rate. Simulated annealing [3] is also utilized, but repeated annealing can be very slow and is likewise prone to local minima. Evolutionary algorithms [4] can be used to solve the optimization problem, but the process is stochastic and lacks reproducibility. In [5], researchers proposed a hybrid model-based/simulation-based method and utilized Bayesian Optimization [6] to search for parameter sets; however, the computational complexity of BO is prohibitive, making the runtime very long.
Machine learning is another promising way to address the above difficulties. However, supervised learning requires a large-scale dataset, which takes a long time to generate; besides, most analog IPs are proprietary and not available to the public. We therefore introduce the L2DC method, which leverages reinforcement learning (RL) to generate circuit data by itself and learns from these data to search for the best parameters. We train our RL agent from scratch without giving it any rules about circuits. In each iteration, the agent obtains observations from the environment, produces an action (a set of parameters) for the circuit simulator environment, and then receives a reward that is a function of gain, bandwidth, power, area, etc. The observations consist of DC operating points, AC magnitude and phase responses, and the transistors' states, obtained from simulation tools such as HSPICE and Spectre. The reward is defined to optimize the desired figure of merit (FOM), composed of several performance metrics. By maximizing the reward, the RL agent optimizes the circuit parameters.
Experimental results on two different circuit environments show that L2DC can achieve similar or better performance than human experts, Bayesian Optimization, and random search. L2DC also has higher sample efficiency than grid-search-aided human expert design. The contributions of this paper are: 1) a reinforcement-learning-based analog circuit optimization method, a learning-based approach that updates its optimization strategy by itself with no need for empirical rules; 2) a sequence-to-sequence model that generates circuit parameters, serving as the actor in the RL agent; 3) substantially higher sample efficiency than grid-search-based human design with comparable or better results, and, under the same runtime constraint, better circuit performance than Bayesian Optimization.
2 Methodology
2.1 Problem Definition
The design specification of an analog IC contains hard constraints and optimization targets. Hard constraints only need to be met, with no benefit from over-optimization. Optimization targets follow the rule "the larger the better" or "the smaller the better," although thresholds also specify the minimum performance designers need to achieve.
Formally, we denote by $x$ the parameters of the components and by $s = (s_1, \dots, s_k)$ the specs to be satisfied, including hard constraints and thresholds for optimization targets. The mapping $f\colon x \mapsto m$ is the circuit simulator, which computes the performance metrics $m = (m_1, \dots, m_k)$ given the parameters. We define a ratio $q_i$ for each spec to measure the extent to which the circuit satisfies it: if the metric should be larger than the spec, $q_i = m_i / s_i$; otherwise, $q_i = s_i / m_i$. Analog IC optimization can then be formalized as a constrained optimization problem: maximize the sum of the $q_i$ of the optimization targets subject to all of the hard constraints being satisfied.
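The ratio $q_i$ and the constrained objective can be sketched as follows; a minimal illustration in which the function names and the example spec values (loosely in the style of the three-stage amplifier specs) are ours, not the paper's code:

```python
def spec_ratio(metric, spec, larger_is_better):
    """q_i: how well one performance metric satisfies its spec (q >= 1 means met)."""
    return metric / spec if larger_is_better else spec / metric

# Hypothetical specs: bandwidth and gain are hard constraints,
# power is an optimization target ("the smaller the better").
SPECS = {
    "bandwidth": (90.0, True),
    "gain":      (20.0, True),
    "power":     (3.00, False),
}
HARD = ("bandwidth", "gain")

def objective(metrics):
    """Sum of q_i over the optimization targets, subject to all hard constraints."""
    q = {k: spec_ratio(metrics[k], s, larger) for k, (s, larger) in SPECS.items()}
    if all(q[k] >= 1.0 for k in HARD):
        return sum(q[k] for k in SPECS if k not in HARD)
    return float("-inf")  # infeasible: some hard constraint is violated
```

Here infeasible points are simply marked with `-inf`; the RL formulation in Section 2.2 instead shapes a reward so the agent can climb toward feasibility.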
2.2 MultiStep Simulation Environment
We present an overview of our L2DC method in Figure 1. L2DC finds the optimized parameters through several epochs of interaction with the simulation environment, each consisting of $T$ steps. At each step $t$, the RL agent takes an observation $o_t$ from the environment (which can be regarded as the state in our environments), outputs an action $a_t$, and then receives a reward $r_t$. By learning from this history, the RL agent optimizes the expected cumulative reward.
The simulation of a circuit is essentially a one-step process, meaning that state information such as the voltages and currents inside the circuit cannot be directly exploited. To effectively feed this information to the RL agent, we propose the following multi-step environment.
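One epoch of this multi-step loop can be sketched as below. `dummy_policy` and `dummy_simulate` are stand-ins for the DDPG actor and the SPICE simulation; the names, dimensions, and the placeholder reward are ours, not the paper's implementation:

```python
import numpy as np

OBS_DIM, N_PARAMS, T = 8, 4, 5  # illustrative sizes

def dummy_simulate(params):
    """Stand-in for a circuit simulation: fake metrics plus the next observation."""
    metrics = {"score": float(params.sum())}
    obs = np.concatenate([params, np.zeros(OBS_DIM - N_PARAMS)])
    return metrics, obs

def dummy_policy(obs, t, rng):
    """Stand-in actor: random normalized parameters in [-1, 1]."""
    return rng.uniform(-1.0, 1.0, size=N_PARAMS)

def run_epoch(policy, rng):
    obs = np.zeros(OBS_DIM)            # all observations start at zero
    trajectory = []
    for t in range(T):
        action = policy(obs, t, rng)   # normalized joint action
        metrics, obs = dummy_simulate(action)
        reward = metrics["score"]      # placeholder for the score of Section 2.2
        trajectory.append((action, reward))
    return trajectory
```

A real setup would replace `dummy_simulate` with an HSPICE/Spectre invocation and store the transitions in a replay buffer for DDPG training.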
Observations
As illustrated in Figure 2, at each step $t$ the observation is separated into global observations and each transistor's own observations. The global observations contain high-level features of the simulation results, including the DC operating-point voltage and current at each node, the AC amplitude/phase responses, and a one-hot vector encoding the current environment step. The local observations are the features of the $i$-th transistor, such as the capacitance between transistor pins. The initial values of all global and local transistor observations are set to zero.
Action Space
Supposing there are $n$ parameters to be searched, the reinforcement-learning agent outputs a normalized joint action $a_t \in [-1, 1]^n$ as the predicted parameters of each component at each step $t$. The actions are then scaled up according to the maximum/minimum constraints of each parameter, where the parameters comprise the widths and lengths of transistors, the capacitance of capacitors, and the resistance of resistors.
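The scaling from the normalized action space onto the physical parameter ranges can be sketched as follows; the bounds below are hypothetical examples of per-parameter min/max constraints, not the paper's actual search space:

```python
import numpy as np

# Hypothetical (min, max) bounds for each searched parameter.
BOUNDS = np.array([
    [0.2e-6, 50e-6],    # transistor width  (m)
    [0.18e-6, 2e-6],    # transistor length (m)
    [10e-15, 5e-12],    # capacitance       (F)
    [100.0,  50e3],     # resistance        (ohm)
])

def scale_action(a):
    """Map a normalized action a in [-1, 1]^n linearly onto [min_i, max_i]."""
    a = np.clip(np.asarray(a, dtype=float), -1.0, 1.0)
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    return lo + (a + 1.0) * 0.5 * (hi - lo)
```

Keeping the actor's outputs normalized and scaling them only at the simulator boundary is a common design choice: it keeps the policy's output range uniform even though the physical quantities span many orders of magnitude.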
Reward Function
After the reinforcement-learning agent outputs the circuit parameters, the simulation environment benchmarks them, generating simulation results for the various performance metrics. We gather the metrics into a scalar score $r$. Let $q_{hc}$ denote the sum of the ratios $q_i$ of the hard constraints and $q_{opt}$ the sum of the $q_i$ of the optimization targets. Then $r$ is defined as

$$r = \begin{cases} q_{hc} + \alpha\, q_{opt} + c_0, & \text{if some hard constraint is unsatisfied,} \\ q_{opt} + c_1, & \text{if all hard constraints are satisfied.} \end{cases} \quad (1)$$

When the hard constraints in the spec are not satisfied, DDPG optimizes the hard-constraint requirements and, weighted by the coefficient $\alpha$, optionally the optimization targets. When all the hard constraints are satisfied, DDPG focuses only on the optimization targets. The constants $c_0$ and $c_1$ ensure that the scores obtained after the hard constraints are satisfied are higher than those obtained before. To fit the reinforcement-learning framework, in which the cumulative reward is maximized, the reward for the $t$-th step is defined as the finite difference of the scores of consecutive steps.
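A minimal sketch of this two-regime score follows. The coefficient and offset constants correspond to those described in the text, but their names and values here are illustrative, not the paper's:

```python
# Illustrative constants: ALPHA weights the optimization targets before the
# hard constraints are met; the offset C1 keeps every post-satisfaction
# score above every pre-satisfaction score.
ALPHA, C0, C1 = 0.1, 0.0, 10.0

def score(q_hard, q_opt):
    """Two-regime score: push hard-constraint ratios toward q >= 1 first,
    then reward only the optimization targets."""
    if all(q >= 1.0 for q in q_hard):
        return C1 + sum(q_opt)
    return C0 + sum(q_hard) + ALPHA * sum(q_opt)
```

With these values, any design satisfying the hard constraints scores above 10, while infeasible designs stay well below it, so the agent is driven to feasibility before optimization.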
2.3 DDPG Agent
As shown in Figure 2, the DDPG [7, 8] actor forms an encoder-decoder framework [9] that translates the observations into the actions. The order in which we feed the observations of the transistors is the order of signal propagation along a path from the input ports to the output ports, motivated by the fact that earlier components influence later ones. The decoder generates the transistor parameters in the same order. To explore the search space, we apply truncated uniform noise drawn from $\mathcal{U}(-\sigma, \sigma)$ to each output, where $\sigma$ denotes the noise volume. Besides, we find that parameter noise [10] also improves performance. For the critic network, we simply use a multi-layer perceptron that estimates the expected cumulative reward of the current policy.
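The exploration noise on the actor's normalized outputs can be sketched as below; the function name is ours, and the clipping to $[-1, 1]$ is one simple way to keep the perturbed actions valid:

```python
import numpy as np

def add_exploration_noise(action, sigma, rng):
    """Perturb each normalized output with uniform noise of volume sigma,
    truncating the result to the valid action range [-1, 1]."""
    noise = rng.uniform(-sigma, sigma, size=np.shape(action))
    return np.clip(np.asarray(action, dtype=float) + noise, -1.0, 1.0)
```

During training, $\sigma$ would typically be annealed so that exploration shrinks as the policy converges.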
3 Experimental Results
3.1 Three-Stage Transimpedance Amplifier
Table 1: Results on the three-stage transimpedance amplifier (column labels inferred from the surrounding text).

Method            | # Simulations | Sample ratio | Bandwidth | Gain | Power | Area | Score
Spec              | –             | –            | 90.0      | 20.0 | 3.00  | –    | –
Human Expert [12] | 10,000,000    | 1            | 90.1      | 20.2 | 1.37  | 211  | 0.00
Random            | 40,000        | –            | 57.3      | 20.7 | 1.37  | 146  | 0.02
BO (MACE [13])    | 1,160†        | –            | 72.5      | 21.1 | 4.25  | 130  | 0.01
Ours (DDPG)       | 40,000        | 250          | 92.5      | 20.7 | 2.50  | 90   | 2.88

† The time complexity of Bayesian Optimization is cubic in the number of samples and its space complexity is quadratic in the number of samples. We therefore ran BO for only 1,160 samples (the same running time as random search and our method).
The first environment is a three-stage transimpedance amplifier from the final project of Stanford EE214A [11]. The schematic of the circuit is shown in Figure 3. We compare L2DC with random search, a grid-search-aided human expert design produced by a PhD student in the EE214A class [12], and MACE [13], a Bayesian Optimization (BO) based algorithm for analog circuit optimization. The batch size of MACE is 10, and we use 50 samples for initialization.
We run DDPG, BO, and random search for about 40 hours each. The comparison results are shown in Table 1, and the learning curves in Figure 3. Random search is unable to meet the bandwidth hard constraint: with seventeen transistors, the environment is very complex and the design space very large. BO is also unable to meet the bandwidth and power hard constraints. DDPG's design meets all the hard constraints and has slightly higher power but a smaller gate area than the human expert design, so it achieves the highest score. The power consumption of DDPG, though slightly higher than the human design's, still satisfies the course spec constraint. Moreover, DDPG uses far fewer simulations than the grid-search-aided human design (40,000 versus 10,000,000), demonstrating the high sample efficiency of our L2DC method.
3.2 Two-Stage Transimpedance Amplifier
Table 2: Results on the two-stage transimpedance amplifier (column labels inferred from the surrounding text).

Method         | # Simulations | Sample ratio | Noise | Gain | Peaking | Power | Area | Bandwidth | Score
Spec           | –             | –            | 19.3  | 57.6 | 1.000   | 18.0  | –    | maximize  | –
Human Expert   | 1,289,618     | 1            | 18.6  | 57.7 | 0.927   | 8.11  | 6.17 | 5.95      | 0.00
Random         | 50,000        | –            | 19.8  | 58.0 | 0.488   | 4.39  | 2.93 | 5.60      | 0.08
BO (MACE [13]) | 880           | –            | 19.6  | 58.6 | 0.629   | 4.24  | 5.69 | 5.16      | 0.15
Ours (DDPG)    | 50,000        | 25           | 19.2  | 58.1 | 0.963   | 3.18  | 2.61 | 5.78      | 0.03
The second environment is a two-stage transimpedance amplifier from the Stanford EE214B design contest [14]. The schematic of the circuit is shown in Figure 4. The contest specifies noise, gain, peaking, and power as hard constraints, and bandwidth as the optimization target.
We run DDPG, BO, and random search for about 30 hours each. In Table 2, we compare the DDPG result with random search, BO, and a human expert design that applies a systematic design-space search methodology; the learning curves are shown in Figure 4. The human expert design achieves the highest bandwidth with all hard constraints satisfied and therefore received the "Most Innovative Design" award [15]. Random search cannot meet the noise hard constraint, and neither can BO. DDPG meets all the hard constraints and achieves a bandwidth of about 97% of the human result, while the sample efficiency of L2DC is about 25× better than the human expert design's.
4 Discussion
As shown in Figure 5, we plot the performance metrics versus learning steps for the three-stage transimpedance amplifier. The vertical dashed line indicates the step at which the hard constraints become satisfied. We observe that power goes up and then comes down, while bandwidth and gain go up and then stay constant. From the RL agent's point of view, it first sacrifices power to increase the hard-constrained metrics (bandwidth and gain); after those two metrics are met, it keeps bandwidth and gain constant and starts to optimize power, a soft optimization target. From this behavior, we infer that the RL agent has learned strategies for analog circuit optimization.
5 Conclusion
We propose L2DC, which leverages reinforcement learning to automatically optimize circuit parameters. Compared to supervised learning, it does not need a large-scale training dataset, which is difficult to obtain due to long simulation times and IP issues. We evaluate our method on two different transimpedance amplifier circuits. By iteratively getting observations, generating a new set of parameters by itself, getting a reward, and adjusting the model, L2DC is able to design circuits with better performance than random search, Bayesian Optimization, and human experts. L2DC works well on both the two-stage transimpedance amplifier and the more complicated three-stage amplifier, demonstrating its generalization ability. Compared with grid-search-aided human design, L2DC achieves comparable performance with substantially higher sample efficiency. Under the same runtime constraint, L2DC also yields better circuit performance than Bayesian Optimization.
6 Acknowledgements
We sincerely thank the MIT Quest for Intelligence and the MIT-IBM Watson AI Lab for supporting our research. We thank Bill Dally for the enlightening talk at ISSCC'18. We thank Amazon for generously providing cloud computation resources.
References
 [1] Behzad Razavi. Design of analog CMOS integrated circuits. International Edition, 400, 2001.
 [2] Prakash Kumar Rout, Debiprasad Priyabrata Acharya, and Ganapati Panda. A multiobjective optimization based fast and robust design methodology for low power and low phase noise current starved vco. IEEE Transactions on Semiconductor Manufacturing, 27(1):43–50, 2014.
 [3] Rodney Phelps, Michael Krasnicki, Rob A Rutenbar, L Richard Carley, and James R Hellums. Anaconda: simulationbased synthesis of analog circuits via stochastic pattern search. IEEE Transactions on ComputerAided Design of Integrated Circuits and Systems, 19(6):703–717, 2000.
 [4] Bo Liu, Francisco V Fernández, Georges Gielen, R CastroLópez, and Elisenda Roca. A memetic approach to the automatic design of highperformance analog integrated circuits. ACM Transactions on Design Automation of Electronic Systems (TODAES), 14(3):42, 2009.
 [5] Wenlong Lyu, Pan Xue, Fan Yang, Changhao Yan, Zhiliang Hong, Xuan Zeng, and Dian Zhou. An efficient Bayesian optimization approach for automated optimization of analog circuits. IEEE Transactions on Circuits and Systems I: Regular Papers, 65(6):1954–1967, 2018.
 [6] Martin Pelikan, David E Goldberg, and Erick Cantú-Paz. BOA: The Bayesian optimization algorithm. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation, Volume 1, pages 525–532. Morgan Kaufmann Publishers Inc., 1999.
 [7] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin A Riedmiller. Deterministic Policy Gradient Algorithms. ICML, 2014.
 [8] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, cs.LG, 2015.
 [9] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to Sequence Learning with Neural Networks. arXiv.org, September 2014.
 [10] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. CoRR, abs/1706.01905, 2017.
 [11] Robert Dutton and Boris Murmann. Stanford ee214a  fundamentals of analog integrated circuit design final project.
 [12] Danny Bankman. Stanford ee214a  fundamentals of analog integrated circuit design, design project report.
 [13] Wenlong Lyu, Fan Yang, Changhao Yan, Dian Zhou, and Xuan Zeng. Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design. In International Conference on Machine Learning, pages 3312–3320, 2018.
 [14] Boris Murmann. Stanford ee214b  advanced analog integrated circuits design, design contest.
 [15] Danny Bankman. Stanford ee214b  advanced analog integrated circuits design, design contest ’most innovative design award’.