Predicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network

Vinci Chow vincichow@cuhk.edu.hk Department of Economics, The Chinese University of Hong Kong, Shatin, Hong Kong
Abstract

In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction. I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as a plate's value depends on the meaning of each individual character and on the characters' combined semantics. I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate. I demonstrate the importance of having a deep network and of retraining. Evaluated on 13 years of historical auction prices, the deep RNN's predictions can explain over 80 percent of price variation, outperforming previous models by a significant margin. I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution.

keywords:
price predictions, expert system, recurrent neural networks, deep learning, natural language processing
journal: Expert Systems with Applications

1 Introduction

Chinese societies place great importance on numerological superstition. Numbers such as 8 (representing prosperity) and 9 (longevity) are often used solely because of the desirable qualities they represent. For example, the Beijing Olympic opening ceremony occurred on 2008/8/8 at 8 p.m., the Bank of China (Hong Kong) opened on 1988/8/8, and the Hong Kong dollar is linked to the U.S. dollar at a rate of around 7.8.

License plates are a very public display of numbers that people can own, and it is therefore unsurprising that they can fetch enormous sums of money. Governments have not overlooked this, and valuable plates are often auctioned off to generate public revenue. Unlike the auctioning of other valuable items, however, license plates generally do not come with a price estimate, which has been shown to be a significant factor affecting the sale price (Ashenfelter, 1989; Milgrom and Weber, 1982). The large number of character combinations and of plates per auction makes it difficult to provide reasonable estimates.

This study proposes that the task of predicting a license plate's price based on its characters can be viewed as a natural language processing (NLP) task. Whereas in the West numbers can be desirable (such as 7) or undesirable (such as 13) in their own right for various reasons, in Chinese societies numbers derive their superstitious value from the characters they rhyme with. As the Chinese language is logosyllabic and analytic, combinations of numbers can stand for sound-alike phrases. Combinations of numbers that rhyme with phrases that have positive connotations are thus desirable. For example, "168," which rhymes with "all the way to prosperity" in Chinese, is the URL of a major Chinese business portal (http://www.168.com). Looking at the historical data analyzed in this study, license plates with the number 168 fetched an average price of US$10,094 and as much as $113,462 in one instance. Combinations of numbers that rhyme with phrases possessing negative connotations are equally undesirable. Plates with the number 888 are generally highly sought after, selling for an average of $4,105 in the data, but adding a 5 (rhymes with "no") in front drastically lowers the average to $342 (HKSAR Transport Department, 2010).

As these examples demonstrate, the value of a certain combination of characters depends on both the meaning of each individual character and the broader semantics. The task at hand is thus closely related to sentiment analysis and machine translation, both of which have advanced significantly in recent years.

Using a deep recurrent neural network (RNN), I demonstrate that a good estimate of a license plate's price can be obtained. The predictions from this study's deep RNN are significantly more accurate than previous attempts to model license plate prices, and are able to explain over 80 percent of price variation. There are two immediate applications of the findings in this paper: first, an accurate prediction model facilitates arbitrage, allowing one to detect underpriced plates that can potentially fetch a higher price in the active second-hand market. Second, the feature vectors extracted from the last recurrent layer of the model can be used to construct a search engine for similar plates, which can provide highly informative justification for the predicted price of any given plate.

In a more general sense, this study makes the following two contributions. First, it demonstrates the value of deep networks and NLP in making accurate price predictions, which is of practical importance in many industries. While the use of deep networks for NLP purposes is common in many areas, their application to price prediction is still in its infancy. License plate auctions provide an ideal testing ground because they circumvent two major problems faced by similar applications, such as stock price or product price prediction: the value of a license plate depends directly on the characters on the plate, so the text data is not a proxy for other underlying factors; and there is no incentive problem, in which strategic interactions between a buyer and the originator of the text data could give the text an ambiguous effect on price (Morgan and Stocken, 2003).

Second, it highlights the impact of splitting the data randomly versus sequentially. On the one hand, for all the models trained in this study, performance was higher when the data was split randomly, because the training data was more representative. On the other hand, the two ways of splitting the data had limited impact on the best-performing set of hyperparameters. In particular, the optimal number of layers and the number of neurons per layer for the recurrent neural network remained the same. The main difference was the optimal embedding dimension, which needed to be larger when there was more variation in the training data. The findings thus suggest that the sets of hyperparameters that work best under the two ways of splitting the data differ in predictable ways.

The paper is organized as follows: Section 2 describes Hong Kong license plate auctions, followed by a review of related studies in Section 3. Section 4 details the model, which is tested in Section 5. Section 6 explores the possibility of using the feature vector from the last recurrent layer to construct a search engine for similar license plates. Section 7 concludes the paper.

2 License Plate Auctions in Hong Kong

License plates have been sold through government auctions in Hong Kong since 1973, and restrictions are placed on the reselling of plates. Between 1997 and 2009, 3,812 plates were auctioned per year, on average.

Traditional plates, which were the only type available before September 2006, consist of either a two-letter prefix or no prefix, followed by up to four digits (e.g., AB 1, LZ 3360, or 168). Traditional plates can be divided into the mutually exclusive categories of special plates and ordinary plates. Special plates are defined by a set of legal rules and include the most desirable plates. (A detailed description of the rules is available on the government's official auction website.) Ordinary plates are issued by the government when a new vehicle is registered. If the vehicle owner does not want the assigned plate, he or she can return the plate and bid for another in an auction. The owner can also reserve any unassigned plate for auction. Only ordinary plates can be resold.

In addition to traditional plates, personalized plates allow vehicle owners to propose the string of characters used. These plates must then be purchased from auctions. The data used in this study do not include this type of plate.

Auctions are open to the public and held on weekends twice a month by the Transport Department. The number of plates to be auctioned ranged from 90 per day in the early years to 280 per day in later years, and the list of plates available is announced to the public well in advance. The English oral ascending auction format is used, with payment settled on the spot, either by debit card or check (HKSAR Transport Department, 2019).

3 Related Studies

Most relevant to the current study is the limited literature on modeling license plate prices, which uses hedonic regressions with a large number of handcrafted features (Woo and Kwok, 1994; Woo et al., 2008; Ng et al., 2010). (One might wonder why no new study has been published since 2010. I believe the reason is that researchers interested in this topic are unaware of the availability of new techniques. Research on license plate pricing happens primarily in the field of economics, where the main statistical technique has always been linear regression; while there is constant innovation within the field, the focus is mainly on better identification, with very little effort going into improving prediction accuracy, and most economists are not trained in the non-regression techniques commonly used in machine learning or statistical learning.) These highly ad hoc models rely on handcrafted features, so they adapt poorly to new data, particularly data that include combinations of characters not previously seen. In contrast, the deep RNN considered in this study learns the value of each combination of characters from its auction price, without the involvement of any handcrafted features.

The literature on using neural networks to make price predictions is very extensive and covers areas such as stock prices (Baba and Kozaki, 1992; Olson and Mossman, 2003; Guresen et al., 2011; de Oliveira et al., 2013), commodity prices (Kohzadi et al., 1996; Kristjanpoller and Minutolo, 2015, 2016), real estate prices (Do and Grudnitski, 1992; Evans et al., 1992; Worzola et al., 1995), electricity prices (Weron, 2014; Dudek, 2016), movie revenues (Sharda and Delen, 2006; Yu et al., 2008; Zhang et al., 2009; Ghiassi et al., 2015), automobile prices (Iseri and Karlik, 2009), and food prices (Haofei et al., 2007). Most studies focus on numeric data and use small, shallow networks, typically with a single hidden layer of fewer than 20 neurons. The focus of this study is very different: predicting prices from combinations of alphanumeric characters. Due to the complexity of this task, the networks used are much larger (up to 1,024 hidden units per layer) and deeper (up to 9 layers).

The approach is closely related to sentiment analysis (Maas et al., 2011; Socher et al., 2013); although that literature focuses mainly on discrete measures of sentiment, price can be seen as a continuous measure of buyer sentiment. A particularly relevant line of research is the use of Twitter feeds to predict stock price movements (Bollen et al., 2011; Bing et al., 2014; Pagolu et al., 2016), although the current study differs in significant ways: a single model is used to generate predictions from character combinations, rather than treating sentiment analysis and price prediction as two distinct tasks, and the actual price level is predicted rather than just the direction of price movement. This end-to-end approach is feasible because the causal relationship between sentiment and price is much stronger for license plates than for stocks.

Finally, Akita et al. (2016) utilize a Long Short-Term Memory (LSTM) network to study the collective price movements of 10 Japanese stocks. The neural network in that study was used solely as a time-series model, taking in vectorized textual information from two simpler, non-neural-network-based models. In contrast, this study applies a neural network directly to textual information.

Deep RNNs have been shown to perform very well in tasks that involve sequential data, such as machine translation (Cho et al., 2014; Sutskever et al., 2014; Zaremba et al., 2014; Amodei et al., 2016) and classification based on text descriptions (Ha et al., 2016), and are therefore used in this study. Predicting the price of a license plate is relatively simple: the model only needs to predict a single value based on a string of up to six characters. This simplicity makes training feasible on the relatively small volume of license plate auction data used in this study, compared with the datasets more commonly used to train deep RNNs.

4 Modeling License Plate Price with a Deep Recurrent Neural Network

Section 4.1 provides an overview of how a batch-normalized bidirectional recurrent neural network works. Readers who are familiar with the model may wish to skip directly to the implementation details in Section 4.2.

4.1 Overview

A neural network is made of neurons. Each neuron can be seen as a regression, generating a single output from a vector of inputs. The earliest networks often used logistic regression, whereas more recent networks usually apply one of several common non-linear transformations to a linear regression instead. The non-linear transformation used in this paper is the rectified-linear unit, $f(x) = \max(x, 0)$.

The simplest network has only one hidden layer. The hidden layer contains multiple neurons, each of which takes the data as input and outputs a single value. Neurons differ in their linear regression weights; depending on the discipline, these are also called coefficients or parameters. The weights are randomly initialized and adjusted during the training process by back-propagating the prediction error. A final neuron takes all the outputs from the hidden layer and produces a single number, which in our case is the predicted price. This final neuron's weights are also randomly initialized and trained. In this paper, the matrix $W^{l}$ represents the weights of all neurons in layer $l$.
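
As a concrete illustration, the following is a minimal sketch in Python with NumPy (not the paper's implementation) of a one-hidden-layer network producing a single prediction from numeric inputs:

```python
import numpy as np

def relu(x):
    # Rectified-linear unit: f(x) = max(x, 0), applied element-wise.
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))     # randomly initialized weights of 4 hidden neurons
W2 = rng.normal(size=(1, 4))     # randomly initialized weights of the final neuron

x = rng.normal(size=6)           # one sample of input data (6 features)
hidden = relu(W1 @ x)            # each hidden neuron emits a single value
predicted_price = W2 @ hidden    # the final neuron outputs the predicted price
print(predicted_price)
```

During training, the prediction error would be back-propagated to adjust W1 and W2.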

A deep network has more than one hidden layer. In this case, each neuron in a hidden layer takes all outputs from the previous hidden layer as input. This type of layer is referred to as a fully connected layer. In this paper, the output from layer $l$ is denoted $h^{l}$, a vector with as many elements as the layer has neurons.

When a neural network is used with text data, each character is fed into the network sequentially, and each position in the sequence is called a time step. Since neural networks require numeric inputs, the character at time step $t$ is represented by a vector $x_{t}$. $x_{t}$ depends only on the character and not on the time step, with its values learnt from training.

A recurrent layer differs from a fully connected layer in that each neuron takes as input not just the output from the previous layer, but also the output from the same layer at the previous time step. A neuron in a bidirectional recurrent layer additionally takes in the output from the same layer at the next time step (Schuster and Paliwal, 1997). The bidirectionality allows the network to access hidden states from both the previous and the next time steps, improving its ability to understand each character in context. In this paper, the output of layer $l$ from the previous time step is denoted $\overrightarrow{h}^{l}_{t-1}$, while the output from the next time step is denoted $\overleftarrow{h}^{l}_{t+1}$. The weights for these inputs are $\overrightarrow{V}^{l}$ and $\overleftarrow{V}^{l}$, respectively.

Neural networks are usually trained with graphics processing units (GPUs), which can process multiple samples simultaneously. To utilize a GPU efficiently, the data is broken into equal-sized portions called minibatches. Weights are updated every time a minibatch is processed.

Batch normalization standardizes a layer's output by its mean and variance within every minibatch. The output is additionally scaled and shifted by $\gamma$ and $\beta$, respectively, both of which are learnt from training. This normalization process has been shown to speed up convergence (Laurent et al., 2016).

4.2 Implementation Details

The input from each sample is an array of characters (e.g., ["X", "Y", "1", "2", "8"]), padded to the same length with a special character. Each character is converted by a lookup table to a vector representation $x_{t}$, known as a character embedding:

$x_{t} = W^{e}\, c_{t}$, where $c_{t}$ is the one-hot encoding of the character at time step $t$ and $W^{e}$ is the embedding matrix.   (1)

The dimension of the character embedding, $m$, is a hyperparameter. The embedding values are initialized randomly and learned through training. The embeddings are fed into the neural network sequentially, indexed by the time step $t$.
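
As an illustration, the short sketch below (in PyTorch; the character set, variable names, and embedding dimension are assumptions made for the example) shows how a padded plate is converted into a sequence of embedding vectors:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary: a padding symbol, digits, and letters.
vocab = ["<pad>"] + list("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")
char_to_idx = {c: i for i, c in enumerate(vocab)}

embed_dim = 128   # the embedding dimension m, a hyperparameter
embedding = nn.Embedding(len(vocab), embed_dim, padding_idx=0)

# A plate such as "XY 128", padded to six characters.
plate = ["X", "Y", "1", "2", "8", "<pad>"]
indices = torch.tensor([[char_to_idx[c] for c in plate]])   # shape (1, 6)
x = embedding(indices)   # shape (1, 6, 128): one vector x_t per time step
```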

The neural network consists of multiple bidirectional recurrent layers, followed by one or more fully connected layers. Batch normalization is applied throughout. Each recurrent layer is thus implemented as follows:

$\overrightarrow{h}^{l}_{t} = f\left(B\left(W^{l} h^{l-1}_{t} + \overrightarrow{V}^{l}\, \overrightarrow{h}^{l}_{t-1}\right)\right)$   (2)
$\overleftarrow{h}^{l}_{t} = f\left(B\left(W^{l} h^{l-1}_{t} + \overleftarrow{V}^{l}\, \overleftarrow{h}^{l}_{t+1}\right)\right)$   (3)
$h^{l}_{t} = \overrightarrow{h}^{l}_{t} + \overleftarrow{h}^{l}_{t}$   (4)
$B(x) = \gamma \odot \hat{x} + \beta$   (5)

where $f$ is the rectified-linear unit, $h^{l-1}_{t}$ is the vector of activations from the previous layer at the same time step $t$, $\overrightarrow{h}^{l}_{t-1}$ represents the activations of the current layer at the previous time step, and $\overleftarrow{h}^{l}_{t+1}$ represents the activations of the current layer at the next time step. $B$ is the BatchNorm transformation, and $\hat{x}$ is the within-mini-batch-standardized version of $x$. (Specifically, $\hat{x} = (x - \mu_{B}) / \sqrt{\sigma^{2}_{B} + \epsilon}$, where $\mu_{B}$ and $\sigma^{2}_{B}$ are the mean and variance of $x$ within each mini-batch, and $\epsilon$ is a small positive constant added to improve numerical stability, set to 0.0001 for all layers.) $W^{l}$, $\overrightarrow{V}^{l}$, and $\overleftarrow{V}^{l}$ are weights learnt by the network through training.

The fully connected layers are implemented as $h^{l} = f\left(B\left(W^{l} h^{l-1}\right)\right)$, except for the last layer, which has a linear activation: $\hat{y} = W^{L} h^{L-1} + b^{L}$, where $b^{L}$ is a bias vector learnt from training. The outputs from all time steps of the final recurrent layer are added together before being fed into the first fully connected layer. To prevent overfitting, dropout is applied after every layer except the last (Hinton et al., 2012). The final scalar output, $\hat{y}$, is the predicted price.
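
The following is a simplified sketch of such an architecture in PyTorch. It is not the paper's exact implementation: it uses the library's built-in bidirectional ReLU RNN, omits the per-layer batch normalization, and the default hyperparameter values shown are placeholders only.

```python
import torch
import torch.nn as nn

class PlatePriceRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden=512,
                 rnn_layers=5, fc_layers=2, dropout=0.05):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional recurrent layers with rectified-linear activations.
        self.rnn = nn.RNN(embed_dim, hidden, num_layers=rnn_layers,
                          nonlinearity="relu", bidirectional=True,
                          batch_first=True, dropout=dropout)
        # Fully connected layers; the last one is linear and outputs one number.
        layers, in_dim = [], 2 * hidden          # forward + backward outputs
        for _ in range(fc_layers - 1):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(dropout)]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 1))
        self.fc = nn.Sequential(*layers)

    def forward(self, chars):                    # chars: (batch, time) indices
        h, _ = self.rnn(self.embed(chars))       # (batch, time, 2 * hidden)
        summed = h.sum(dim=1)                    # sum outputs over all time steps
        return self.fc(summed).squeeze(-1)       # predicted (log) price
```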

The model’s hyperparameters include the dimension of character embeddings, number of recurrent layers, number of fully connected layers, number of hidden units in each layer, and dropout rate. These parameters must be selected ahead of training.

5 Experiment

5.1 Data

Figure 1: Sample Model Setup
Figure 2: Distribution of Plate Prices

The data used are the Hong Kong license plate auction results from January 1997 to July 2010, obtained from the HKSAR government (HKSAR Transport Department, 2010). (Although the data is not available online, it can be obtained by contacting the HKSAR Transport Department.) The data contain 52,926 auction entries, each consisting of (i) the characters on the plate, (ii) the sale price (or a specific symbol if the plate was unsold), and (iii) the auction date. Ordinary plates start at a reserve price of at least HK$1,000 (US$128.2), and special plates at HK$5,000 (US$644.4). Because of the reserve prices, not every plate is sold, and 5.1 percent of the plates in the data were unsold. As these plates did not possess a price, I followed previous studies in dropping them from the dataset, leaving 50,698 entries available for the experiment.

Figure 2 plots the distribution of prices within the data. The figure shows that the prices are highly skewed: while the median sale price is $641, the mean sale price is $2,073. The most expensive plate in the data is “12,” which was sold for $910,256 in February 2005. To compensate for this skewness, log prices were used in training and inference.

The finalized data were divided into three parts, in two different ways: the first way divided the data randomly, while the second divided the data sequentially by auction date into non-overlapping parts. In both cases, training was conducted with 64 percent of the data, validation was conducted with 16 percent, and the remaining 20 percent served as the test set. The first way represents an ideal scenario, in which different types of plates are equally represented in each set. To further ensure that each set contained plates across the price range, the data was first divided into 500 bins according to price, with the train-validation-test split conducted within each bin. The second way creates a more realistic scenario, as it represents what a model in practical deployment would face. It is also a significantly more difficult scenario: because the government releases plates alphabetically over time, plates starting with letters late in the alphabet are absent from sequentially-split training data. For example, plates starting with "M" were not available before 2005, and plates starting with "P" did not appear until 2010. It is therefore very difficult for a model trained on sequentially-split data to learn the values of plates starting with later letters.
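
The sketch below illustrates the random, price-stratified split (the column name and the use of pandas are assumptions; the sequential split would instead sort by auction date and cut at the 64 and 80 percent marks):

```python
import numpy as np
import pandas as pd

def random_split(df, price_col="price", n_bins=500, seed=0):
    # Assign each plate to one of 500 price bins, then split 64/16/20
    # within each bin so that all price ranges appear in every set.
    rng = np.random.default_rng(seed)
    bins = pd.qcut(df[price_col].rank(method="first"), q=n_bins, labels=False)
    train, valid, test = [], [], []
    for _, group in df.groupby(bins):
        idx = rng.permutation(group.index.to_numpy())
        n_train, n_valid = int(0.64 * len(idx)), int(0.16 * len(idx))
        train.append(group.loc[idx[:n_train]])
        valid.append(group.loc[idx[n_train:n_train + n_valid]])
        test.append(group.loc[idx[n_train + n_valid:]])
    return pd.concat(train), pd.concat(valid), pd.concat(test)
```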

5.2 Training

I conducted a grid search to investigate the properties of different combinations of hyperparameters, varying the dimension of character embeddings (12 to 256), the number of recurrent layers (1 to 9), the number of fully connected layers (1 to 3), the number of hidden units in each layer (64 to 2048) and the dropout rate (0 to .1). A total of 1080 sets of hyperparameters were investigated.

The grid search was conducted in three passes. In the first pass, a network was trained for 40 epochs under each set of hyperparameters, repeated 4 times. In the second pass, training was repeated 10 times for each of the 10 best sets of hyperparameters from the first pass, selected by median validation root mean-squared error (RMSE), a goodness-of-fit measure commonly used for continuous targets such as price:

$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_{i} - y_{i}\right)^{2}}$   (6)

where $y_{i}$ is the actual price of license plate $i$, $\hat{y}_{i}$ is its predicted price, and $N$ is the number of plates.

In the final pass, training was repeated 30 times under the best set of hyperparameters from the second pass, again selected by median validation RMSE. Training duration in the second and third passes was 120 epochs.
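
A sketch of this three-pass selection procedure is shown below. The grid points listed and the train_and_validate helper are hypothetical; only the pass structure follows the description above.

```python
import itertools
import statistics

def train_and_validate(hp, epochs):
    """Hypothetical helper: train one network under hyperparameters `hp` for
    `epochs` epochs and return its validation RMSE. Not implemented here."""
    raise NotImplementedError

grid = {                      # hypothetical grid points within the stated ranges
    "embed_dim": [12, 24, 48, 128, 256],
    "rnn_layers": [1, 3, 5, 7, 9],
    "fc_layers": [1, 2, 3],
    "hidden": [64, 256, 512, 1024, 2048],
    "dropout": [0.0, 0.05, 0.1],
}

def rank_by_median_rmse(candidates, repeats, epochs):
    # Train each candidate `repeats` times and rank by median validation RMSE.
    scores = {}
    for hp in candidates:
        runs = [train_and_validate(hp, epochs=epochs) for _ in range(repeats)]
        scores[hp] = statistics.median(runs)
    return sorted(candidates, key=scores.get)

candidates = list(itertools.product(*grid.values()))
top10 = rank_by_median_rmse(candidates, repeats=4, epochs=40)[:10]   # pass 1
best = rank_by_median_rmse(top10, repeats=10, epochs=120)[0]         # pass 2
final = [train_and_validate(best, epochs=120) for _ in range(30)]    # pass 3
```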

During each training session, a network was trained to minimize the mean-squared error, with a different random initialization in each session. An Adam optimizer with a learning rate of 0.001 was used throughout (Kingma and Ba, 2014). After training was completed, the best state according to the validation error was reloaded for inference.
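
A single training session under these settings might look like the following sketch (in PyTorch, with hypothetical data loaders yielding character indices and log prices):

```python
import copy
import torch

def train_once(model, train_loader, valid_loader, epochs=120, lr=0.001):
    opt = torch.optim.Adam(model.parameters(), lr=lr)   # Adam, learning rate 0.001
    loss_fn = torch.nn.MSELoss()                        # mean-squared error on log prices
    best_err, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for chars, log_price in train_loader:
            opt.zero_grad()
            loss_fn(model(chars), log_price).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            errs = [loss_fn(model(c), p).item() for c, p in valid_loader]
        valid_err = sum(errs) / len(errs)
        if valid_err < best_err:                        # track the best state so far
            best_err, best_state = valid_err, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)                   # reload the best state for inference
    return model
```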

Training was conducted with four NVIDIA GTX 1080 GPUs. To fully use the GPUs, a large mini-batch size of 2,048 was used. (I also experimented with smaller batch sizes of 64 and 512; keeping the training time constant, the smaller batch sizes resulted in worse performance due to the reduction in epochs.) During the first pass, the median training time on a single GPU ranged from 8 seconds for a 2-layer, 64-hidden-unit network with an embedding dimension of 12, to 1 minute 57 seconds for an 8-layer, 1,024-hidden-unit network with an embedding dimension of 24, and to 7 minutes 50 seconds for a 12-layer, 2,048-hidden-unit network with an embedding dimension of 256.

Finally, I also trained recreations of models from previous studies, as well as a series of fully connected networks and character n-gram models, for comparison. Given that the maximum length of a plate is six characters, for the n-gram models I focused on small values of n, and in each case calculated a predicted price based on the median and the mean of the k closest neighbors from the training data, for a range of values of k.
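
For reference, a character n-gram nearest-neighbor baseline of this kind can be sketched as follows (the plate strings and prices are made-up toy examples, and the paper's own implementation may differ):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import NearestNeighbors

train_plates = ["HC3360", "BG3360", "HV3360", "MM293", "MM203"]   # toy examples
train_log_prices = np.log([1000.0, 3000.0, 3000.0, 5000.0, 5000.0])

# Character n-gram counts (here 1- to 4-grams) as features.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 4))
X = vectorizer.fit_transform(train_plates)

# Predict with the median log price of the k closest training plates.
k = min(3, len(train_plates))
nn_index = NearestNeighbors(n_neighbors=k).fit(X)
_, idx = nn_index.kneighbors(vectorizer.transform(["LZ3360"]))
prediction = np.exp(np.median(train_log_prices[idx[0]]))
```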

5.3 Model Performance

Configuration Train RMSE Valid RMSE Test RMSE Train R² Valid R² Test R²
Random Split
RNN 512-128-5-2-.05 .4391 .5505 .5561 .8845 .8223 .8171
Woo et al. (2008) .7127 .7109 .7110 .6984 .7000 .6983
Ng et al. (2010) .7284 .7294 .7277 .6850 .6842 .6840
MLP 512-128-7-.05 .6240 .6083 .7467 .78235 .72785 .6457
unigram NN-10 .8945 1.004 .9997 .5221 .4086 .4088
(1-4)-gram NN-10 .9034 1.012 1.013 .5125 .3996 .3931
Sequential Split
RNN 512-48-5-2-.1 .5018 .5111 .6928 .8592 .8089 .6951
Woo et al. (2008) .7123 .6438 .8147 .7163 .6967 .5783
Ng et al. (2010) .7339 .6593 .8128 .6988 .6819 .5802
MLP 512-48-7-.1 .6326 .6074 .7475 .7762 .7300 .6450
unigram NN-10 .8543 1.046 1.094 .5239 .3979 .3846
(1-4)-gram NN-10 .8936 1.086 1.144 .4791 .3503 .3269
Configuration of the RNN is reported in the format [Hidden Units]-[Embed. Dimension]-[Recurrent Layers]-[Fully Connected Layers]-[Dropout Rate]. Configuration of the MLP is reported in the same format except that there are no recurrent layers. Numbers for the RNN and MLP models are the medians from 30 runs.
Table 1: Model Performance

Table 1 reports the summary statistics for the best set of hyperparameters out of all the sets specified in Section 5.2, based on the median validation RMSE. Because separate models were trained for the randomly-split and the sequentially-split data, two sets of statistics are reported. For each of the two settings (Random Split and Sequential Split), I report the performance of the best RNN model, followed by the performance of various other models for comparison. Performance figures for the training, validation, and test data are included to highlight out-of-sample performance. I report two measures of performance, RMSE and R-squared, because the latter is more commonly used in economics and finance. R-squared measures the fraction of the variation in the target that the model is able to explain and is defined as:

$R^{2} = 1 - \frac{\sum_{i=1}^{N}\left(\hat{y}_{i} - y_{i}\right)^{2}}{\sum_{i=1}^{N}\left(y_{i} - \bar{y}\right)^{2}}$   (7)

where $\bar{y}$ is the mean price of all license plates.
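
Both measures can be computed directly from the predicted and actual prices, as in the short NumPy sketch below:

```python
import numpy as np

def rmse(y_pred, y_true):
    # Equation (6): root mean-squared prediction error.
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def r_squared(y_pred, y_true):
    # Equation (7): fraction of the variation in prices explained by the model.
    ss_res = np.sum((y_pred - y_true) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```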

The best model was able to explain more than 80 percent of the variation in prices when the data was randomly split. As a comparison, Woo et al. (2008) and Ng et al. (2010), which are recreations of the regression models in those papers, were capable of explaining only 70 percent of the variation at most. (To make the comparison meaningful, the recreations contained only features based on the characters on a plate.)

The importance of having recurrent layers can be seen from the inferior performance of the fully connected network (MLP) with the same embedding dimension and the same number of layers and neurons as the best RNN model. This model was able to explain less than 66 percent of the variation in prices.

In the interest of space, I include only the two best-performing n-gram models, which are based on the median prices of neighbors. Both models were significantly inferior to the RNN and the hedonic regressions, explaining only about 40 percent of the variation in prices. For the unigram model, the best validation performance was achieved with 10 neighbors (NN-10). For higher-order n-grams, models with an unlimited number of features perform very poorly, as they generate a large number of features that rarely appear in the data. Restricting the number of features based on occurrences and allowing a range of n within a single model improve performance, but never surpass the performance of the simple unigram. The performance of using the median price and using the mean price is very close, with a difference smaller than 0.05 in all cases.

All models took a significant performance hit when the data was split sequentially, with the RNN maintaining its performance lead over the other models. The hit was expected: as explained previously, plates starting with letters late in the alphabet were not available for training and validation because the government releases plates alphabetically over time. The impact was particularly severe for the test set, because it was drawn from the time period furthest away from that of the training set. The best RNN model in this case has the same number of layers and the same number of neurons per layer as in the random-split case, but the optimal size of the character embedding was significantly smaller. This was once again due to plates starting with later letters not being available for training and validation, so these two sets had less variation when the data was split sequentially rather than randomly.

Figure 3 plots the relationship between predicted price and actual price from a representative run of the best model, grouped in bins of HK$1,000 (US$128.2). The model performed well for a wide range of prices, with bins tightly clustered along the 45-degree line. It consistently underestimated the price of the most expensive plates, however, suggesting that the buyers of these plates had placed on them exceptional value that the model could not capture.

Figure 3: Actual vs Predicted Price
Figure 4: Performance Fluctuations

5.4 Model Stability

Unlike hedonic regressions, which give the same predictions and achieve the same performance in every run, a neural network is susceptible to fluctuations caused by convergence to local minima. Figure 4 plots the distribution of test RMSEs over the best models' 30 training runs: the histogram shows the actual test RMSE distribution, while the red line is a kernel density estimate. The errors are tightly clustered, with standard deviations of 0.025 for the randomly-split sample and 0.036 for the sequentially-split sample. This suggests that performance fluctuation is unlikely to be a concern in practice.

5.5 Retraining Over Time

Over time, a model could conceivably become obsolete if, for example, tastes or the economic environment changed. In this section, I investigate the effect of periodically retraining the model on the sequentially-split data. Specifically, retraining was conducted throughout the test period yearly, monthly, or never. The best RNN-only model was used, with the sample size kept constant at 25,990 observations in each retraining, roughly five years of data. The process was repeated 30 times as before.
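
A sketch of this retraining experiment is given below; the column name, the fit and evaluate helpers, and the month-by-month walk are assumptions made for illustration.

```python
import pandas as pd

def retraining_experiment(df, fit, evaluate, window=25990, schedule="yearly"):
    # `fit` and `evaluate` are hypothetical helpers supplied by the caller:
    # `fit` trains a model on a DataFrame, `evaluate` returns RMSE / R^2.
    df = df.sort_values("auction_date")
    cutoff = df["auction_date"].iloc[int(0.8 * len(df))]          # start of test period
    months = pd.period_range(cutoff, df["auction_date"].max(), freq="M")
    model, scores = None, []
    for i, month in enumerate(months):
        due = schedule == "monthly" or (schedule == "yearly" and i % 12 == 0)
        if model is None or (schedule != "never" and due):
            history = df[df["auction_date"] < month.start_time].tail(window)
            model = fit(history)                                   # refit on rolling window
        batch = df[df["auction_date"].dt.to_period("M") == month]
        scores.append(evaluate(model, batch))                      # monthly performance
    return scores
```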

Figure 5: Impact of Retraining Frequency

Figure 5 plots the median RMSE and R², evaluated monthly. For the RNN model with no retraining, prediction accuracy dropped rapidly by both measures: RMSE increased by an average of 0.017 per month, while R² dropped by 0.01 per month. Yearly retraining was significantly better, with an 8.6 percent lower RMSE and a 6.9 percent higher R². The additional benefit of monthly retraining was, however, much smaller: compared with yearly retraining, there was only a 3.3 percent reduction in RMSE and a 2.6 percent increase in explanatory power. The differences were statistically significant under Wilcoxon signed-rank tests (yearly retraining versus no retraining, and monthly retraining versus yearly retraining).

6 Explaining Predictions by Constructing a Search Engine for Similar Plates

Compared with models such as regressions and n-grams, it is relatively hard to understand the rationale behind an RNN model's prediction, given the large number of parameters involved and the complexity of their interactions. If the RNN model is to be deployed in the field, it needs to be able to explain its predictions in order to convince human users to adopt it in practice. One way to do so is to extract a feature vector for each plate by summing the output of the last recurrent layer over time. This feature vector has the same size as the number of neurons in the last recurrent layer and represents what the model "thinks" of the license plate in question. The feature vectors for all plates can be fed into a standard k-nearest-neighbor model, effectively creating a search engine for similar plates. The similar plates returned by this search engine can be viewed as the "rationale" for the model's prediction.

To demonstrate this procedure, I use the best RNN model in Table 1 to generate feature vectors for all training samples. These vectors are used to set up a k-NN model. When the user submits a query, a price prediction is made with the RNN model, while a number of examples are provided by the k-NN model as the rationale.
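
Concretely, the procedure can be sketched as follows. This sketch assumes the PlatePriceRNN example from Section 4.2 and hypothetical tensors of character indices; it is an illustration rather than the exact implementation.

```python
import torch
from sklearn.neighbors import NearestNeighbors

def plate_features(model, char_indices):
    # Sum the last recurrent layer's outputs over time steps to obtain a
    # fixed-length feature vector for each plate.
    with torch.no_grad():
        h, _ = model.rnn(model.embed(char_indices))   # (batch, time, features)
        return h.sum(dim=1).numpy()

# `model`, `train_char_indices`, and `query_char_indices` are assumed to exist.
# Build the search engine from the training plates...
train_vectors = plate_features(model, train_char_indices)
engine = NearestNeighbors(n_neighbors=3).fit(train_vectors)

# ...and answer a query with a predicted price plus three similar plates.
query_vectors = plate_features(model, query_char_indices)
with torch.no_grad():
    predicted_log_price = model(query_char_indices)
_, neighbor_rows = engine.kneighbors(query_vectors)   # indices of similar historical plates
```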

Table 2 illustrates the outcome of this procedure with three examples. The model was asked to predict the price of three plates, ranging from low to high value. The predicted prices are listed in the "Query and Predicted Price" row, while the "Historical Examples" rows list, for each query, the top three entries returned by the k-NN model. Notice how the procedure focused on the numeric part for the low-value plate and on the alphabetical part for the middle-value plate, reflecting the value of having identical digits and identical letters, respectively. The procedure was also able to inform the user that a plate had been sold before. Finally, the examples provided for the high-value plate show why it is hard to obtain an accurate prediction for such plates: the historical prices of similar plates are themselves highly variable.

                              Plate    Price     Plate   Price    Plate   Price
Query and Predicted Price     LZ3360   1000      MM293   5000     13      2182000
Historical Examples (k-NN)    HC3360   1000      MM293   5000     178     195000
                              BG3360   3000      MM203   5000     138     1100000
                              HV3360   3000      MM923   9000     12      7100000
Table 2: Explaining Predictions with Automated Selection of Historical Examples

7 Concluding Remarks

This study demonstrates that a deep recurrent neural network can provide good estimates of license plate prices, with significantly higher accuracy than other models. The deep RNN is capable of learning the prices from the raw characters on the plates, while other models must rely on handcrafted features. With modern hardware, it takes only a few minutes to train the best-performing model described previously, so it is feasible to implement a system in which the model is constantly retrained for accuracy.

A natural next step along this line of research is the construction of a model for personalized plates. Personalized plates contain owner-submitted sequences of characters and so may have vastly more complex meanings. Exactly how the model should be designed—for example, whether there should be separate models for different types of plates, or whether pre-training on another text corpus could help—remains to be studied.

Acknowledgements

I would like to thank Melody Tang and Kenneth Chu for their excellent work assisting this project.

References

  • R. Akita, A. Yoshihara, T. Matsubara, and K. Uehara (2016) Deep learning for stock prediction using numerical and textual information. In 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Vol. , pp. 1–6. External Links: Document, ISSN Cited by: §3.
  • D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos, E. Elsen, J. Engel, L. Fan, C. Fougner, T. Han, A. Y. Hannun, B. Jun, P. LeGresley, L. Lin, S. Narang, A. Y. Ng, S. Ozair, R. Prenger, J. Raiman, S. Satheesh, D. Seetapun, S. Sengupta, Y. Wang, Z. Wang, C. Wang, B. Xiao, D. Yogatama, J. Zhan, and Z. Zhu (2016) Deep speech 2: end-to-end speech recognition in english and mandarin. Proceedings of The 33rd International Conference on Machine Learning, pp. 173–182. Cited by: §3.
  • O. Ashenfelter (1989) How auctions work for wine and art. 3 (3), pp. 23–36. External Links: Document, Link Cited by: §1.
  • N. Baba and M. Kozaki (1992) An intelligent forecasting system of stock price using neural networks. In [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, Vol. 1, pp. 371–377 vol.1. External Links: Document Cited by: §3.
  • L. Bing, K. C. C. Chan, and C. Ou (2014) Public sentiment analysis in twitter data for prediction of a company’s stock price movements. In 2014 IEEE 11th International Conference on e-Business Engineering, pp. 232–239. External Links: Document Cited by: §3.
  • J. Bollen, H. Mao, and X. Zeng (2011) Twitter mood predicts the stock market. Journal of Computational Science 2 (1), pp. 1 – 8. Note: External Links: ISSN 1877-7503, Document, Link Cited by: §3.
  • K. Cho, B. van Merriënboer, C. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, pp. 1724–1734. External Links: Link Cited by: §3.
  • F. A. de Oliveira, C. N. Nobre, and L. E. Zarate (2013) Applying artificial neural networks to prediction of stock price and improvement of the directional prediction index - case study of PETR4, Petrobras, Brazil. 40 (18), pp. 7596 – 7606. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • Q. Do and G. Grudnitski (1992) A neural network approach to residential property appraisal. pp. 38–45. Cited by: §3.
  • G. Dudek (2016) Multilayer perceptron for GEFCom2014 probabilistic electricity price forecasting. 32 (3), pp. 1057 – 1060. Note: External Links: ISSN 0169-2070, Document, Link Cited by: §3.
  • A. Evans, H. James, and A. Collins (1992) Artificial neural networks: an application to residential valuation in the UK. 11, pp. 195–204. Cited by: §3.
  • M. Ghiassi, D. Lio, and B. Moon (2015) Pre-production forecasting of movie revenues with a dynamic artificial neural network. Expert Systems with Applications 42 (6), pp. 3176 – 3193. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • E. Guresen, G. Kayakutlu, and T. U. Daim (2011) Using artificial neural network models in stock market index prediction. 38 (8), pp. 10389 – 10397. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • J. Ha, H. Pyo, and J. Kim (2016) Large-scale item categorization in e-commerce using multiple recurrent neural networks. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, New York, NY, USA, pp. 107–115. External Links: ISBN 978-1-4503-4232-2, Link, Document Cited by: §3.
  • Z. Haofei, X. Guoping, Y. Fangting, and Y. Han (2007) A neural network model based on the multi-stage optimization approach for short-term food price forecasting in china. 33 (2), pp. 347 – 356. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2012) Improving neural networks by preventing co-adaptation of feature detectors. CoRR abs/1207.0580. External Links: Link Cited by: §4.2.
  • HKSAR Transport Department (Ed.) (2010) Result of auction of traditional vehicle registration marks. HKSAR Transport Department. Cited by: §1, §5.1.
  • HKSAR Transport Department (2019) External Links: Link Cited by: §2.
  • A. Iseri and B. Karlik (2009) An artificial neural networks approach on automobile pricing. 36 (2, Part 1), pp. 2155 – 2160. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • D. P. Kingma and J. Ba (2014) Adam: A method for stochastic optimization. CoRR abs/1412.6980. External Links: Link, 1412.6980 Cited by: §5.2.
  • N. Kohzadi, M. S. Boyd, B. Kermanshahi, and I. Kaastra (1996) A comparison of artificial neural network and time series models for forecasting commodity prices. 10 (2), pp. 169 – 181. Note: Financial Applications, Part I External Links: ISSN 0925-2312, Document, Link Cited by: §3.
  • W. Kristjanpoller and M. C. Minutolo (2015) Gold price volatility: a forecasting approach using the artificial neural network-garch model. 42 (20), pp. 7245 – 7251. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • W. Kristjanpoller and M. C. Minutolo (2016) Forecasting volatility of oil price using an artificial neural network-garch model. 65 (), pp. 233 – 241. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio (2016) Batch normalized recurrent neural networks. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2657–2661. External Links: Document Cited by: §4.1.
  • A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts (2011) Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, pp. 142–150. Cited by: §3.
  • P. R. Milgrom and R. J. Weber (1982) A theory of auctions and competitive bidding. 50 (5), pp. 1089–1122. External Links: ISSN 00129682, 14680262, Link Cited by: §1.
  • J. Morgan and P. C. Stocken (2003) An analysis of stock recommendations. 34 (1), pp. 183–203. External Links: ISSN 07416261, Link Cited by: §1.
  • T. Ng, T. Chong, and X. Du (2010) The value of superstitions. 31 (3), pp. 293 – 309. Note: External Links: ISSN 0167-4870, Document, Link Cited by: §3, §5.3.
  • D. Olson and C. Mossman (2003) Neural network forecasts of canadian stock returns using accounting ratios. 19 (3), pp. 453 – 465. Note: External Links: ISSN 0169-2070, Document, Link Cited by: §3.
  • V. S. Pagolu, K. N. R. Challa, G. Panda, and B. Majhi (2016) Sentiment analysis of twitter data for predicting stock market movements. In 2016 International Conference on Signal Processing, Communication, Power and Embedded System, Cited by: §3.
  • M. Schuster and K. K. Paliwal (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45 (11), pp. 2673–2681. External Links: Document, ISSN 1053-587X Cited by: §4.1.
  • R. Sharda and D. Delen (2006) Predicting box-office success of motion pictures with neural networks. Expert Systems with Applications 30 (2), pp. 243 – 254. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.
  • R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), Vol. 1631. Cited by: §3.
  • I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.), pp. 3104–3112. External Links: Link Cited by: §3.
  • R. Weron (2014) Electricity price forecasting: a review of the state-of-the-art with a look into the future. 30 (4), pp. 1030 – 1081. Note: External Links: ISSN 0169-2070, Document, Link Cited by: §3.
  • C. Woo, I. Horowitz, S. Luk, and A. Lai (2008) Willingness to pay and nuanced cultural cues: evidence from hong kong’s license-plate auction market. 29 (1), pp. 35 – 53. Note: External Links: ISSN 0167-4870, Document, Link Cited by: §3, §5.3.
  • C. Woo and R. H.F. Kwok (1994) Vanity, superstition and auction price. 44 (4), pp. 389 – 395. Note: External Links: ISSN 0165-1765, Document, Link Cited by: §3.
  • E. Worzola, M. Lenk, and A. Silva (1995) An exploration of neural networks and its application to real estate valuation. pp. 185–201. Cited by: §3.
  • L. Yu, S. Wang, and K. K. Lai (2008) Forecasting crude oil price with an emd-based neural network ensemble learning paradigm. 30 (5), pp. 2623 – 2635. Note: External Links: ISSN 0140-9883, Document, Link Cited by: §3.
  • W. Zaremba, I. Sutskever, and O. Vinyals (2014) Recurrent neural network regularization. CoRR abs/1409.2329. External Links: Link Cited by: §3.
  • L. Zhang, J. Luo, and S. Yang (2009) Forecasting box office revenue of movies with BP neural network. 36 (3, Part 2), pp. 6580 – 6587. Note: External Links: ISSN 0957-4174, Document, Link Cited by: §3.