MathQA: Towards Interpretable Math Word Problem Solving
with Operation-Based Formalisms
Abstract
We introduce a large-scale dataset of math word problems and an interpretable neural math problem solver that learns to map problems to operation programs. Due to annotation challenges, current datasets in this domain have been either relatively small in scale or have not offered precise operational annotations over diverse problem types. We introduce a new representation language to model precise operation programs corresponding to each math problem; these programs aim to improve both the performance and the interpretability of the learned models. Using this representation language, our new dataset, MathQA, significantly enhances the AQuA dataset with fully-specified operational programs. We additionally introduce a neural sequence-to-program model enhanced with automatic problem categorization. Our experiments show improvements over competitive baselines on both our MathQA and the AQuA datasets. The results are still significantly lower than human performance, indicating that the dataset poses new challenges for future research. Our dataset is available at: https://mathqa.github.io/mathQA/.
1 Introduction
Answering math word problems poses unique challenges for logical reasoning over implicit or explicit quantities expressed in text. Math word-problem solving requires extraction of salient information from natural language narratives. Automatic solvers must transform the textual narratives into executable meaning representations, a process that requires both high precision and, in the case of story problems, significant world knowledge.
As shown by the geometry question in Figure 1, math word problems are generally narratives describing the progress of actions and relations over some entities and quantities. The operation program underlying the problem in Figure 1 highlights the complexity of the problem-solving task. Here, we need the ability to deduce implied constants (pi) and knowledge of domain-specific formulas (area of the square).
In this paper, we introduce a new operation-based representation language for solving math word problems. We use this representation language to construct MathQA (the dataset is available at: https://mathqa.github.io/mathQA/), a new large-scale, diverse dataset of 37k English multiple-choice math word problems covering multiple math domain categories, by modeling operation programs corresponding to word problems in the AQuA dataset Ling et al. (2017). We introduce a neural model for mapping problems to operation programs with domain categorization.
Most current datasets in this domain are small in scale Kushman et al. (2014) or do not offer precise operational annotations over diverse problem types Ling et al. (2017). This is mainly because annotating math word problems precisely across diverse problem categories is challenging even for humans, since it requires background math knowledge from annotators. Our representation language facilitates the annotation task for crowdsourcing and increases the interpretability of the proposed model.
Our sequence-to-program model with categorization trained on our MathQA dataset outperforms the previous state-of-the-art on the AQuA test set in spite of the smaller training size. These results indicate the superiority of our representation language and the quality of the formal annotations in our dataset. Our model achieves competitive results on MathQA, but still falls below human performance, indicating that the dataset poses new challenges for future research. Our contributions are as follows:

We introduce a large-scale dataset of math word problems that are densely annotated with operation programs.

We introduce a new representation language to model operation programs corresponding to each math problem, aiming to improve both the performance and the interpretability of the learned models.

We introduce a neural architecture leveraging a sequence-to-program model with automatic problem categorization, achieving competitive results on our dataset as well as the AQuA dataset.
2 Background and Related Work
Large-Scale Datasets Several large-scale math word problem datasets have been released in recent years. These include Dolphin18K Huang et al. (2016), Math23K Wang et al. (2017) and AQuA. We choose the 2017 AQuA-RAT dataset to demonstrate the use of our representation language on an existing large-scale math word problem solving dataset. AQuA provides over 100K GRE- and GMAT-level math word problems. The problems are multiple choice and come from a wide range of domains.
The scale and diversity of this dataset make it particularly suited for training deep-learning models to solve word problems. However, there is a significant amount of unwanted noise in the dataset, including problems with incorrect solutions, problems that are unsolvable without brute-force enumeration of solutions, and rationales that contain few or none of the steps required to solve the corresponding problem. The motivation for our dataset comes from the fact that we want to maintain the challenging nature of the problems included in the AQuA dataset while removing noise that hinders the ability of neural models to learn the types of signal necessary for problem solving by logical reasoning.
Additional Datasets Several smaller datasets have been compiled in recent years. Most of these works have focused on algebra word problems, including MAWPS Koncel-Kedziorski et al. (2016), Alg514 Kushman et al. (2014), and DRAW-1K Upadhyay and Chang (2017). Many of these datasets have sought to align underlying equations or systems of equations with word problem text. While recent works like Liang et al. (2018); Locascio et al. (2016) have explored representing math word problems with logical formalisms and regular expressions, our work is the first to provide well-defined formalisms for representing intermediate problem-solving steps that are shown to generalize beyond algebra problems.
Solving with Handcrafted Features Due to the sparsity of suitable data, early work on math word problem solving used pattern-matching to map word problems to mathematical expressions Bobrow (1964); Charniak (1968, 1969), as well as non-neural statistical modeling and semantic parsing approaches Liguda and Pfeiffer (2012).
Some effort has been made to parse problems in order to extract salient entities Hosseini et al. (2017). This approach views entities as containers that can be composed into an equation tree representation; the tree is updated over time by operations implied by the problem text.
Many early works focused on solving addition and subtraction problems Briars and Larkin (1984); Dellarosa (1986); Bakman (2007). As word problems become more diverse and complex, we require models capable of solving simultaneous equation systems. This has led to an increasing focus on finding semantic alignments between math word problems and mentions of numbers Roy and Roth (2018). The main idea behind those works is to find all possible patterns of equations and rank them based on the problem.
Neural Word Problem Solvers Following the increasing availability of large-scale datasets like AQuA, several recent works have explored deep neural approaches to math word problem solving Wang et al. (2017). Our representation language is motivated by exploration of using intermediate formalisms in the training of deep neural problem-solving networks, as is done in the work of Huang et al. (2018b) to solve problems with sequence-to-sequence models. While this work focused on single-variable arithmetic problems, our work introduces a formal language of operations for covering more complex multivariate problems and systems of equations.
Interpretability of Solvers While the statistical models with handcrafted features introduced by prior work are arguably “interpretable” due to the relative sparsity of features as well as the clear alignments between inputs and outputs, newer neural approaches present fresh challenges to the interpretability of math word problem solvers Huang et al. (2018a). While this area is relatively unexplored, a prior approach to increasing the robustness and interpretability of math word problem-solving models uses an adversarial dataset to determine whether models are learning logical reasoning or exploiting dataset biases through pattern-matching Liang et al. (2018).
3 Representing Math Word Problems
A math word problem consists of a narrative that grounds mathematical formalisms in real-world concepts. Solving these problems is a challenge for both humans and automatic methods like neural network-based solvers, since it requires logical reasoning about implied actions and relations between entities. For example, in Figure 2, operations like addition and division are not explicitly mentioned in the word problem text, but they are implied by the question.
As we examine the context of a math word problem, we have to select arguments for operations based on which values are unimportant for solving the problem and which are salient. In Figure 2, the numeric value “100” appears in the context but does not appear in the underlying equation.
By selecting implied operations and arguments, we can generate a program of intermediate steps for solving a math word problem. Each step involves a mathematical operation and its related arguments. In Figure 2, there are three addition operations and one division. As illustrated in the figure, operations can be dependent on previous ones through the values they use as arguments. Every math word problem can be solved by sequentially executing these programs of dependent operations and arguments.
We define formalisms for expressing these sequential operation programs with a domain-aware representation language. An operation program in our representation language is a sequence of n operations o_1, ..., o_n. The general form is shown below, where each operation o_i takes in a list of arguments a^i of length n_i:

o_1(a^1_1, ..., a^1_{n_1})  o_2(a^2_1, ..., a^2_{n_2})  ...  o_n(a^n_1, ..., a^n_{n_n})    (1)
Given this general definition, the problem in Figure 2 has the following representation (here the arguments #0, #1 and #2 denote the outputs of operations 1, 2 and 3, respectively):

add(n_0, n_1)  add(#0, n_2)  add(#1, n_3)  divide(#2, ·)    (2)
Our representation language consists of 58 operations and is designed considering the following objectives.

Correctness Operation programs should result in the correct solution when all operations are executed.

Domain-awareness Operation programs should make use of both math knowledge and domain knowledge associated with subfields like geometry and probability to determine which operations and arguments to use.

Human interpretability Each operation and argument used to obtain the correct solution should relate to part of the input word problem context or a previous step in the operation program.
Learning logical forms has led to success in other areas of semantic parsing Cheng et al. (2017); Zelle and Mooney (1996); Zettlemoyer and Collins (2007, 2005) and is a natural representation for math word problem-solving steps. By augmenting our dataset with these formalisms, we are able to cover most types of math word problems (we omit high-order polynomials and problems whose solutions are entirely non-numeric). In contrast to other representations like simultaneous equations, our formalisms ensure that every problem-solving step is aligned to a previous one. There are three advantages to this approach. First, we use this representation language to provide human annotators with clear steps for how a particular problem should be solved with math and domain knowledge. Second, our formalisms provide neural models with a continuous path along which to execute operations for problems with systems of equations, instead of forcing models to align equations before problem solving. This reduces the possibility of intermediate errors being propagated and leading to an incorrect solution. Finally, by having neural models generate a solution path in our representation language before computing the final solution, we are able to reconstruct the logical hops inferred by the model output, increasing model interpretability.
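As a concrete illustration, sequential execution of an operation program can be sketched as below. This is illustrative code rather than the authors' implementation; the operation and constant names follow the worked examples in Section 7.2, and only a small subset of the 58 operations is shown.

```python
# Minimal sketch of executing a linear operation program: each step names
# an operation and its arguments, and "#i" refers to the result of the
# i-th earlier step. (Illustrative only; subset of the 58 operations.)
import math

OPERATIONS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
    "floor": lambda a: math.floor(a),
}

CONSTANTS = {"const_2": 2.0, "const_0.2778": 0.2778}  # km/hr -> m/s factor

def execute(program):
    """Run a list of (operation, arguments) steps; return the final value."""
    results = []
    for op, args in program:
        values = []
        for arg in args:
            if isinstance(arg, str) and arg.startswith("#"):
                values.append(results[int(arg[1:])])   # earlier step's output
            elif arg in CONSTANTS:
                values.append(CONSTANTS[arg])          # domain constant
            else:
                values.append(float(arg))              # number from the problem
        results.append(OPERATIONS[op](*values))
    return results[-1]

# The train/bridge program from Section 7.2:
# add(110, 132) multiply(72, const_0.2778) divide(#0, #1) floor(#2)
program = [("add", [110, 132]),
           ("multiply", [72, "const_0.2778"]),
           ("divide", ["#0", "#1"]),
           ("floor", ["#2"])]
print(execute(program))  # -> 12
```

Because each step only references problem numbers, constants, or earlier results, the dependency structure of the program is explicit, which is what enables the interpretability argument above.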
4 Dataset
Our dataset (called MathQA) consists of 37,200 math word problems, corresponding lists of multiple-choice options, and aligned operation programs. We use problems in the AQuA dataset and carefully annotate those problems with formal operation programs.
Math problems are first categorized into math domains using term frequencies (more details in Section 5.2). These domains are used to prune the search space of possible operations to align with the word problem text. Figure 3 shows the categorybased hierarchies for operation formalisms.
We use crowdsourcing to carefully align problems with operation programs (Section 4.1). Table 1 shows overall statistics of the dataset. (We also experimented with an automatic dynamic-programming approach to annotation that generates operation programs for problems using numbers in the AQuA rationales. Due to the noise in the rationales, only a fraction of those problems passed our human validation. This is mainly because the rationales are not complete programs and fail to explicitly describe all important numbers and operations required to solve the problem. To maintain the interpretability of operation paths, we did not include automatic annotations in our dataset and focus on operation programs derived by crowdsourcing.)
Category | #Prob. | Avg #words | #Vocab | Avg #ops
Geometry | 3,316 | 34.3 | 1,839 | 4.8
Physics | 9,830 | 37.3 | 3,340 | 5.0
Probability | 663 | 38.9 | 937 | 5.0
Gain-Loss | 4,377 | 34.3 | 1,533 | 5.7
General | 17,796 | 38.6 | 6,912 | 5.1
Other | 1,277 | 31.3 | 1,425 | 4.7
All | 37,259 | 37.9 | 6,664 | 5.3
4.1 Annotation using Crowd Workers
Annotating GRE-level math problems can be a challenging and time-consuming task for humans. We design a dynamic annotation platform to annotate math word problems with formal operation programs. Our annotation platform has the following properties: (a) it provides basic math knowledge to annotators, (b) it is dynamic, iteratively calculating intermediate results after each operation submission, and (c) it employs quality-control strategies.
Dynamic Annotation Platform
The annotators are provided with a problem description, a list of operations related to the problem category, and a list of valid arguments. They iteratively select operations and arguments until the problem is solved.

Operation Selection The annotators are instructed to sequentially select an operation from the list of operations in the problem category. Annotators are provided with math knowledge by hovering over any operation and viewing a hint that consists of its arguments, formula, and a short explanation of the operation.

Argument Selection After selecting an operation, the list of valid arguments is presented to the annotators to choose from. Valid arguments consist of numbers in the problem, constants in the problem category, and previous calculations. The annotators are restricted to selecting only from these valid arguments, to prevent noisy and dangling numbers. After submission of an operation and the corresponding arguments, the result of the operation is automatically calculated and added as a new valid argument to the argument list.

Program Submission To prevent annotators from submitting arbitrary programs, we enforce restrictions on the final submission. Our platform only accepts programs that include some numbers from the problem and whose final calculation is very close to the correct numeric solution.
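The bookkeeping behind the three steps above can be sketched as follows. The helper names are hypothetical (the actual platform is a web interface), but the logic mirrors the description: the pool of valid arguments grows with each submitted operation, and a final check rejects arbitrary programs. The fencing example from Table 5 is used as the annotated problem.

```python
# Sketch of the annotation platform's bookkeeping (hypothetical helpers).
def submit_operation(op_fn, args, intermediate_results):
    """Compute a submitted operation and append its result to the pool of
    valid arguments available to later steps."""
    result = op_fn(*args)
    intermediate_results.append(result)
    return result

def accept_program(final_value, used_problem_numbers, correct_answer, tol=1e-2):
    # Only accept programs that use some numbers from the problem text and
    # whose final value is very close to the correct solution.
    return bool(used_problem_numbers) and abs(final_value - correct_answer) <= tol

# Annotating the fencing problem from Table 5:
# divide(10, 20), multiply(#0, const_2), add(20, #1)
results = []
submit_operation(lambda a, b: a / b, (10, 20), results)
submit_operation(lambda a, b: a * b, (results[0], 2), results)
final = submit_operation(lambda a, b: a + b, (20, results[1]), results)
print(final, accept_program(final, [10, 20], 21.0))  # -> 21.0 True
```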
High Quality Crowd Workers
We dynamically evaluate and retain high-quality annotators through a collection of quality-control questions, taking advantage of the annotation platform of Figure Eight (https://www.figureeight.com). The annotators are randomly evaluated through a predefined set of test questions, and they have to maintain an accuracy threshold to be able to continue annotating. If an annotator’s accuracy drops below the threshold, their previous annotations are labeled as untrusted and are added back to the pool of annotations.
Alignment Validation
To further evaluate the quality of the annotated programs, we use a validation strategy that checks whether the problems and annotated programs are aligned. Under this strategy, at least 2 out of 3 validators must rank an operation program as valid for it to be selected. Validation accuracy is measured across categories.
5 Models
We develop encoder-decoder neural models to map word problems to a set of feasible operation programs. We match the result of the executed operation program against the list of multiple-choice options given for a particular problem. The matching solution is the final model output.
We frame the problem of aligning an operation program with a math word problem as a neural machine translation (NMT) task, where the word problem and gold operation program form a parallel text pair. The target vocabulary includes all possible operations and arguments in our representation language.
5.1 Sequence-to-Program
For our initial sequence-to-program model, we follow the attention-based NMT paradigm of Bahdanau et al. (2015); Cho et al. (2014). We encode the source word problem text using a bidirectional RNN encoder. The decoder predicts a distribution over the vocabulary and input tokens to generate each operation or argument in the target operation program. For our sequence-to-program model vocabulary, we use informed generation, in which the program tokens are generated separately from the vocabulary of operations and the vocabulary of arguments.
The encoded text is represented by a sequence of d-dimensional hidden states e_1, ..., e_T, where T is the length of the input text. A context vector c_t is computed by taking the weighted sum of the attention model weights a_{ti} for each timestep t and each encoder hidden state e_i:

c_t = Σ_i a_{ti} e_i
We compute the d-dimensional decoder hidden state h_t using an LSTM recurrent layer:

h_t = LSTM(h_{t-1}, y_{t-1}, c_t)    (3)
At each timestep t, we predict an operator o_t or an argument a^j_t, where j corresponds to the index of the argument in operator o_t’s argument list. This prediction is conditioned on the previously decoded tokens y_{<t} and the input x to decode an entire operation program y of length m:

P(o_t | y_{<t}, x) = g(f(h_t, c_t))    (4)

P(a^j_t | y_{<t}, x) = g(f(h_t, c_t))    (5)
Here f is a 1-layer feed-forward neural network and g is the softmax function. During training, we minimize the negative log-likelihood (NLL) using the following objective:

L = - Σ_{t=1}^{m} log P(y_t | y_{<t}, x)    (6)
At test time, we only observe the input text when predicting operation programs:

ŷ = argmax_y Π_{t=1}^{m} P(y_t | y_{<t}, x)    (7)
5.2 Categorized Sequence-to-Program Model
We extend our base sequence-to-program model to integrate knowledge of math word problem domain categories. We modify the RNN decoder layers that compute the decoder hidden state to be category-aware. Here, the category label k is deterministically computed by the category extractor (explained below). It functions as a hard decision switch that determines which set of parameters to use for the hidden state computation:

h_t = LSTM_k(h_{t-1}, y_{t-1}, c_t)    (8)
The updated form of the prediction in equation (7) is shown below:

ŷ = argmax_y Π_{t=1}^{m} P(y_t | y_{<t}, x, k)    (9)
The full model architecture is shown in Figure 4.
Domain-Specific Category Extraction
We first construct a lexicon of n-grams relating to a specific domain. The lexicon is a list consisting of domain-specific categories and associated n-grams. For each domain category k in the lexicon, we select associated n-grams that occur frequently in word problems belonging to domain category k but rarely appear in other domain categories. We compute the n-gram frequency F(k, p) as the number of n-grams associated with category k appearing in the text of a word problem p. We obtain a list of potential categories for p by choosing all categories whose n-gram frequency is nonzero, and then assign a category label to p based on which category has the highest n-gram frequency.
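The extraction step above can be sketched as follows. The lexicon contents and the fallback category name are illustrative assumptions, not the authors' actual lists; only the counting-and-argmax logic follows the description.

```python
# Sketch of the lexicon-based category extractor (illustrative lexicon).
LEXICON = {
    "geometry":    ["rectangle", "circle", "radius", "perimeter"],
    "physics":     ["speed", "km/hr", "resistance", "litre"],
    "probability": ["probability", "dice", "cards"],
}

def ngram_frequency(category, problem_text):
    """Count occurrences of the category's associated n-grams in the problem."""
    text = problem_text.lower()
    return sum(text.count(ng) for ng in LEXICON[category])

def categorize(problem_text, default="general"):
    scores = {c: ngram_frequency(c, problem_text) for c in LEXICON}
    best = max(scores, key=scores.get)
    # Fall back to a general category when no lexicon n-gram matches.
    return best if scores[best] > 0 else default

print(categorize("A train running at the speed of 72 km/hr ..."))  # -> physics
```

Because each n-gram carries a single mathematical interpretation here, this sketch also exhibits the noise discussed in Section 7.2 (e.g., "square" would always vote for geometry).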
5.3 Solving Operation Programs
Once a complete operation program has been decoded, each operator in the program is executed sequentially along with its predicted set of arguments to obtain a possible solution. For each word problem p with option list o, we generate a beam of the top decoded operation programs. We execute each decoded program g to find the solution from the list of options o for the problem. We first choose options that are within a threshold of the executed value of g, and then select the predicted solution by checking the number of selected options and the minimum distance between the executed value of g and a possible option for p. For the problems in AQuA that do not belong to any category of MathQA, we randomly choose an option.
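The option-matching step can be sketched as below. The threshold value and the deterministic fallback (first option instead of a random choice) are assumptions for illustration; the paper does not specify them.

```python
# Sketch of matching executed beam values against multiple-choice options.
def select_option(executed_values, options, threshold=0.5):
    """Pick the option closest to any executed beam value within a threshold;
    fall back to the first option (standing in for a random guess) otherwise."""
    best_option, best_dist = None, float("inf")
    for value in executed_values:        # one value per decoded program in the beam
        for opt in options:
            dist = abs(value - opt)
            if dist <= threshold and dist < best_dist:
                best_option, best_dist = opt, dist
    return best_option if best_option is not None else options[0]

# The train/bridge program executes to ~12.099 seconds; option 12 is closest.
print(select_option([12.099, 11.2], [10, 11, 12, 13, 14]))  # -> 12
```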
6 Experimental Setup
6.1 Datasets
Our dataset consists of 37,200 problems, randomly split into training, development, and test sets. Our dataset significantly enhances the AQuA dataset by fully annotating a portion of the solvable problems in the AQuA dataset with formal operation programs.
We carefully study the AQuA dataset. Many of the problems are near-duplicates with slight changes to the math word problem stories or numerical values, since they are expanded from a set of 30,000 seed problems through crowdsourcing Ling et al. (2017). These changes are not always reflected in the rationales, leading to incorrect solutions. There are also some problems that are not solvable given current math word problem solving frameworks, because they require a level of reasoning not yet modeled by neural networks. Sequence problems, for example, require understanding of patterns that are difficult to intuit without domain knowledge like sequence formulas, and can only be solved automatically through brute force or guessing. Table 2 shows a full breakdown of the AQuA dataset by solvability. (There is overlap between unsolvable subsets; for example, a sequence problem may also be a duplicate of another problem in the AQuA dataset.)
Subset | Train | Valid
Unsolvable - No Words | 37 | 0
Unsolvable - Sequence | 1,991 | 4
Unsolvable - Requires Options | 6,643 | 8
Unsolvable - Non-numeric | 10,227 | 14
Duplicates | 17,294 | 0
Solvable | 65,991 | 229
Total | 97,467 | 254
6.2 Annotation Details
We follow the annotation strategy described in Section 4 to formally annotate problems with operation programs. (We also tried two other strategies of showing extra information, rationales or final solutions, to annotators to facilitate solving the problems. However, our manual validation showed that annotators mostly used that extra information to artificially build an operation program without reading the problem.)
Annotator Agreements and Evaluations
Our expert evaluation of the annotation procedure on a collection of 500 problems shows that 92% of the annotations are valid. Additionally, the expert validation agrees with the crowdsourcing validation task.
Annotation Expansion
The AQuA dataset contains groups of problems that share similar characteristics; such problems can be solved with similar operation programs. We find closely similar problems, replace numeric values with generic numbers, and expand annotations to cover more problems from the AQuA dataset. For similarity, we use the Levenshtein distance with a threshold of 4 words in edit distance.
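A sketch of this similarity check is given below. The word-level tokenization and the NUM placeholder used for generic-number replacement are simplifying assumptions; the threshold of 4 words follows the text.

```python
# Sketch of the word-level edit-distance check for near-duplicate problems.
import re

def word_edit_distance(a, b):
    """Standard Levenshtein distance computed over word tokens."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def normalize(problem_text):
    # Replace numeric values with a generic placeholder before comparing.
    return re.sub(r"\d+(\.\d+)?", "NUM", problem_text.lower()).split()

def near_duplicates(p1, p2, threshold=4):
    return word_edit_distance(normalize(p1), normalize(p2)) <= threshold

print(near_duplicates("A train 110m long runs at 72 km/hr.",
                      "A train 240m long runs at 54 km/hr."))  # -> True
```

Problems that match under this check can share (suitably re-grounded) operation programs, which is how the annotations are expanded.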
6.3 Model and Training Details
We use the official Python implementation of OpenNMT Klein et al. (). We choose an LSTM-based encoder-decoder architecture, trained with the Adam optimizer Kingma and Ba (2015). The encoder and decoder share the same hidden size and number of layers. The word embedding vectors are randomly initialized. At inference time, we use beam search with a beam size of 200 for AQuA and 100 for MathQA.
The program vocabulary consists of the operations in our representation language and valid arguments. For valid arguments, we do not use their actual values, since the space of values is very large. Instead, we keep a list of numbers indexed by their source. Constants are predefined numbers that are available to all problems. Problem numbers are added to the list according to their order in the problem text. Calculated numbers from intermediate steps are added to the list according to the operation order.
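The problem-number part of this abstraction can be sketched as follows. The token names n0, n1, ... mirror the positional indexing described above; the regular expression used to find numbers is an assumption for illustration.

```python
# Sketch of abstracting numbers in the problem text into positional
# vocabulary tokens (n0, n1, ...), as used alongside const_* tokens
# and intermediate-result tokens (#0, #1, ...).
import re

def number_tokens(problem_text):
    """Map each number in the problem to a positional token n0, n1, ..."""
    numbers = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", problem_text)]
    return {f"n{i}": v for i, v in enumerate(numbers)}

tokens = number_tokens("A train 110m long running at 72 km/hr crosses a 132m bridge.")
print(tokens)  # -> {'n0': 110.0, 'n1': 72.0, 'n2': 132.0}
```

At execution time the mapping is inverted: a decoded token like n1 is replaced by the corresponding value (here 72) before the operation is applied.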
7 Experimental Results
7.1 Results
Table 3 compares the performance of our sequence-to-program models trained on MathQA with baselines on the MathQA and AQuA test sets. The base model is referred to as “Seq2prog,” while our model with categorization is “Seq2prog + cat.” Accuracy is measured as the percentage of problems for which the model selects the correct multiple-choice option.
We observe an improvement for our “Seq2prog + cat” model despite the fact that our training data is substantially smaller than the AQuA dataset and our model is much simpler than the state-of-the-art model on that dataset. This indicates the effectiveness of our formal representation language in incorporating domain knowledge, as well as the quality of the annotations in our dataset.
Model | MathQA | AQuA
Random | 20.0 | 20.0
AQuA Model | - | 36.4
Seq2prog | 51.9 | 33.0
Seq2prog + cat | 54.2 | 37.9
7.2 Analysis
Qualitative Analysis
Table 5 and Figure 5 show some examples of problems solved by our method. We also analyzed 50 problems that were solved incorrectly by our system on the MathQA dataset; Table 4 summarizes the four major categories of errors.
The most common type of error involves problems that need a complicated or long chain of mathematical reasoning. For example, the first problem in Table 4 requires reasoning that goes beyond one sentence. Other errors are due to limitations of our representation language. For example, the second problem in Table 4 requires a factorization operation, which is not defined in our representation language; future work can investigate more domains of mathematics such as logic, number factors, etc. Some errors are due to the slightly noisy nature of our categorization strategy. For example, the third problem in Table 4 is mistakenly categorized as belonging to the physics domain due to the presence of the words “m”, “cm”, and “liter” in the problem text, while the correct category for the problem is geometry. The final category of errors comprises problems that do not have enough textual context or are erroneous (e.g., the fourth problem in Table 4).
Error type | Problem
Hard problems | Jane and Ashley take 8 days and 40 days respectively to complete a project when they work on it alone. They thought if they worked on the project together, they would take fewer days to complete it. During the period that they were working together, Jane took an eight day leave from work. This led to Jane’s working for four extra days on her own to complete the project. How long did it take to finish the project?
Limitation in representation language | How many different positive integers are factors of 25?
Categorization errors | A cistern of capacity 8000 litres measures externally 3.3 m by 2.6 m by 1.3 m and its walls are 5 cm thick. The thickness of the bottom is:
Incorrect or insufficient problem text | 45 x ? = 25 of 900
Problem: A rectangular field is to be fenced on three sides leaving a side of 20 feet uncovered. If the area of the field is 10 sq. feet, how many feet of fencing will be required?
Operations: divide(10, 20), multiply(#0, const_2), add(20, #1)
Problem: How long does a train 110m long, running at a speed of 72 km/hr, take to cross a bridge 132m in length?
Operations: add(110, 132), multiply(72, const_0.2778), divide(#0, #1), floor(#2)
Impact of Categorization
Table 3 indicates that our category-aware model outperforms the base model on both the AQuA and MathQA datasets. The gain is relatively small because the current model only uses categorization decisions as hard constraints at decoding time. Moreover, the problem categorization might be noisy due to our use of only one mathematical interpretation for each domain-specific n-gram. For example, the presence of the words “square” or “cube” in the text of a math word problem indicates that the word problem is related to the geometry domain, but these unigrams can also refer to an exponentiation operation (x^2 or x^3).
To measure the effectiveness of our categorization strategy, we collected human annotations for 100 problems and measured both the agreement between human annotators and their agreement with our model. As a future extension of this work, we would like to also consider the context in which domain-specific n-grams appear.
Discussions
As mentioned in Section 3, the continuous nature of our formalism allows us to solve problems requiring systems of equations. However, there are other types of word problems that are currently unsolvable or that have multiple interpretations leading to multiple correct solutions. Problems that can only be solved by brute force instead of logical reasoning, as well as non-narrative problems that do not fit the definition of a math word problem (in Table 2 these appear as “No Words”), are removed from consideration; still, other problems are beyond the scope of current models but could pose an interesting challenge for future work. One example is the domain of sequence problems. Unlike past word problem-solving models, our models incorporate domain-specific math knowledge, which is potentially extensible to common sequence and series formulas.
8 Conclusion
In this work, we introduced a representation language and annotation system for large-scale math word problem-solving datasets that addresses unwanted noise in these datasets and the lack of formal operation-based representations. We demonstrated the effectiveness of our representation language by transforming solvable AQuA word problems into operation formalisms. Experimental results show that both our base and category-aware sequence-to-program models outperform baselines and previous results on the AQuA dataset when trained on data aligned with our representation language. Our representation language provides an extra layer of supervision that can be used to reduce the influence of statistical bias in datasets like AQuA. Additionally, generated operation programs like the examples in Figure 5 demonstrate the effectiveness of these operation formalisms for representing math word problems in a human-interpretable form.
The gap between the performance of our models and human performance indicates that MathQA still maintains the challenging nature of AQuA problems. In future work, we plan to extend our representation language and models to cover currently unsolvable problems, including sequence and high-order polynomial problems.
Acknowledgements
This research was supported by ONR (N000141812826), NSF (IIS 1616112), Allen Distinguished Investigator Award, and gifts from Google, Allen Institute for AI, Amazon, and Bloomberg. We thank Marti A. Hearst, Katie Stasaski, and the anonymous reviewers for their helpful comments.
References
 Bahdanau et al. (2015) D. Bahdanau, K. Cho, and Y. Bengio. 2015. Machine translation by jointly learning to align and translate. In ICLR.
 Bakman (2007) Yefim Bakman. 2007. Robust understanding of word problems with extraneous information. In arXiv preprint math/0701393.
 Bobrow (1964) Daniel G Bobrow. 1964. Natural language input for a computer problem solving system.
 Briars and Larkin (1984) Diane J Briars and Jill H Larkin. 1984. An integrated model of skill in solving elementary word problems. In Cognition and instruction 1(3), pages 245–296.
 Charniak (1968) Eugene Charniak. 1968. Calculus word problems. Ph.D. thesis, Massachusetts Institute of Technology.
 Charniak (1969) Eugene Charniak. 1969. Computer solution of calculus word problems. In Proceedings of the 1st international joint conference on Artificial intelligence, pages 303–316. Morgan Kaufmann Publishers Inc.
 Cheng et al. (2017) Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning structured natural language representations for semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 44–55. Association for Computational Linguistics.
 Cho et al. (2014) Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111.
 Dellarosa (1986) Denise Dellarosa. 1986. A computer simulation of children's arithmetic word-problem solving. Behavior Research Methods, Instruments, & Computers, 18(2), pages 147–154.
 Hosseini et al. (2014) Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
 Huang et al. (2018a) Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin. 2018a. Neural math word problem solver with reinforcement learning. In Proceedings of the 27th International Conference on Computational Linguistics, pages 213–223.
 Huang et al. (2016) Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? Large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL).
 Huang et al. (2018b) Danqing Huang, Jin-Ge Yao, Chin-Yew Lin, Qingyu Zhou, and Jian Yin. 2018b. Using intermediate representations to solve math word problems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
 Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In the 3rd International Conference on Learning Representations (ICLR).
 Klein et al. (2017) G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. ArXiv e-prints.
 Koncel-Kedziorski et al. (2016) Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. MAWPS: A math word problem repository. In NAACL.
 Kushman et al. (2014) Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL).
 Liang et al. (2018) Chao-Chun Liang, Yu-Shiang Wong, Yi-Chung Lin, and Keh-Yih Su. 2018. A meaning-based statistical English math word problem solver. In Proceedings of NAACL-HLT 2018.
 Liguda and Pfeiffer (2012) Christian Liguda and Thies Pfeiffer. 2012. Modeling math word problems with augmented semantic networks. In International Conference on Application of Natural Language to Information Systems, pages 247–252. Springer.
 Ling et al. (2017) Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of ACL.
 Locascio et al. (2016) Nicholas Locascio, Karthik Narasimhan, Eduardo DeLeon, Nate Kushman, and Regina Barzilay. 2016. Neural generation of regular expressions from natural language with minimal domain knowledge. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 247–252. Association for Computational Linguistics.
 Roy and Roth (2018) Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. In Proceedings of NAACL-HLT 2018.
 Upadhyay and Chang (2017) Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics.
 Wang et al. (2017) Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP).
 Zelle and Mooney (1996) John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In AAAI/IAAI, pages 1050–1055. AAAI Press/MIT Press.
 Zettlemoyer and Collins (2005) Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI '05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence, pages 658–666.
 Zettlemoyer and Collins (2007) Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 678–687. Association for Computational Linguistics.