Visual Attention Model for Cross-sectional Stock Return Prediction
and End-to-End Multimodal Market Representation Learning
Abstract
Technical and fundamental analysis are traditional tools used to analyze individual stocks; however, the finance literature has shown that the price movement of each individual stock correlates heavily with those of other stocks, especially those within the same sector. In this paper we propose a general-purpose market representation that incorporates fundamental and technical indicators and relationships between individual stocks. We treat the daily stock market as a ‘market image’ where rows (grouped by market sector) represent individual stocks and columns represent indicators. We apply a convolutional neural network over this market image to build market features in a hierarchical way. We use a recurrent neural network, with an attention mechanism over the market feature maps, to model temporal dynamics in the market. We show that our proposed model outperforms strong baselines in both short-term and long-term stock return prediction tasks. We also show another use for our market image: to construct concise and dense market embeddings suitable for downstream prediction tasks.
Ran Zhao Carnegie Mellon University rzhao1@cs.cmu.edu Yuntian Deng Harvard University dengyuntian@seas.harvard.edu Mark Dredze Johns Hopkins University mdredze@cs.jhu.edu
Arun Verma Bloomberg averma3@bloomberg.net David Rosenberg Bloomberg drosenberg44@bloomberg.net Amanda Stent Bloomberg astent@bloomberg.net
Copyright © Bloomberg. All Rights Reserved.
Introduction
Over the past few years, there have been multiple proposals for adopting machine learning techniques in quantitative finance research. The rapidly growing volume of market data allows researchers to upgrade trading algorithms from simple factor-based linear regression models to complex machine learning models such as reinforcement learning (Lee 2001), k-nearest neighbors (Alkhatib et al. 2013) and Gaussian processes (Mojaddady, Nabi, and Khadivi 2011).
Modeling stock price movement is very challenging since stock prices are affected by many external factors such as political events, market liquidity and economic strength. A variety of financial theories for market pricing have been proposed, which can serve as the theoretical foundation for designing tailored machine learning models. First, the efficient market hypothesis (Malkiel and Fama 1970) states that all available information is reflected in market prices. Fluctuations in stock prices are a result of newly released information. Therefore, through analyzing individual stock price movements, a machine learning-based model should be able to decode the embedded market information.
Second, value investing theory (Graham and Dodd 2002) suggests buying stocks below their intrinsic value to limit downside risk. The intrinsic value of a company is calculated from fundamental indicators, which are revealed in quarterly and annual financial reports. A machine learning-based model should therefore be capable of discovering the relationship between different types of fundamental indicator and the intrinsic value of a company.
Third, the methodology of technical analysis introduced in (Murphy 1999) describes some well-known, highly situated and context-dependent leading indicators of price movement, such as the relative strength index (RSI) and moving average convergence/divergence (MACD). Thus, a machine learning-based model should be able to estimate the predictive power of traditional technical indicators in different market situations.
Fourth, the stock market has a well-defined structure. At the macro level, people have invented financial indexes for major markets, such as the NASDAQ-100 and the Dow Jones Industrial Average; these are composite variables that may indicate market dynamics. At the micro level, the stock market is usually divided into 10 major sectors and tens of subsectors for key areas of the economy. Stocks in the same sector have a shared line of business and are expected to perform similarly in the long run (Murphy 2011). Traditional ways of dealing with market information are to include hand-crafted microeconomic indicators in predictive models, or to construct covariance matrices of returns among groups of stocks. However, such hand-crafted features can become gradually lagged and unable to adjust dynamically to market changes. Therefore, a machine learning-based model should leverage information from the whole market as well as the sector of each included company.
Inspired by these financial theories, we implement an end-to-end market-aware system that is capable of capturing market dynamics from multimodal information (fundamental indicators, technical indicators, and market structure) for stock return prediction. First, we construct a ‘market image’ as in Figure 1, in which each row represents one stock and each column represents an indicator from the three major categories shown in Table 1. Stocks are grouped in a fixed order by their sector and subsector (industry). Then we apply state-of-the-art deep learning models from computer vision and natural language processing on top of the market image. Specifically, our contributions in this work are to: (1) leverage the power of attention-based convolutional neural networks to model spatial relationships between stocks in the market dimension, and of recurrent neural networks for time series forecasting of stock returns in the temporal dimension; and (2) use a convolutional encoder-decoder architecture to reconstruct the market image for learning a generic and compact market representation. In the next sections, we present our market image, then our model for market-aware stock prediction, and finally our method for computing generic and compact market representations. We present empirical results showing that our model for market-aware stock prediction beats baselines used in finance, and that our market representation beats PCA.
Table 1: Indicator sets and their time scales: the Price-Volume and technical indicator sets are computed daily, and the fundamental indicator set is updated quarterly.
The Market Image
We represent the daily market as an image (a matrix) $M \in \mathbb{R}^{N \times K}$, where $N$ is the number of unique stocks and $K$ is the number of extracted traditional trading indicators. In our experiments, we used the indicators from Table 1. A sample market image is depicted in Figure 1. The market image serves as a snapshot of market dynamics. These market images can be stacked to form a market cube, as shown in Figure 2, thus incorporating a temporal dimension into the representation.
For our experiments later in this paper, we collected the 40 indicators from Table 1 for each of the S&P 500 index constituents on a daily basis from January 1999 to December 2016, and used this to construct daily market images. The size of each daily image is 500 (stocks) × 40 (multimodal indicators). In each market image, stocks are grouped first by the ten sectors in the Global Industry Classification Standard (GICS), and within each sector, by the GICS subsectors.
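To make the representation concrete, the following Python sketch builds a toy market image and stacks several into a market cube. The tickers, sectors, indicator count, and random values are all hypothetical; the real images are 500 × 40.

```python
import numpy as np

# Toy universe: 6 stocks already sorted by (hypothetical) sector, 3 indicators.
stocks = [("AAA", "Tech"), ("BBB", "Tech"),
          ("CCC", "Energy"), ("DDD", "Energy"),
          ("EEE", "Health"), ("FFF", "Health")]
rng = np.random.default_rng(0)

# One market image: one row per stock, one column per indicator.
market_image = rng.normal(size=(len(stocks), 3))
print(market_image.shape)   # (6, 3)

# Stacking n consecutive daily images adds the temporal dimension (the cube).
n_days = 10
market_cube = np.stack([rng.normal(size=(len(stocks), 3)) for _ in range(n_days)])
print(market_cube.shape)    # (10, 6, 3)
```

In the paper the rows are the S&P 500 constituents ordered by GICS sector and subsector; here the grouping is only implied by the sorted list.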
For the market image, we find the min and max value of each indicator in the training set and apply min-max scaling to normalize each feature into the [0, 1] range:

$\tilde{x} = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}} \qquad (1)$
Some fundamental indicators are updated only quarterly; to fill the resulting blank pixels in our market images, we applied a backward fill policy to the affected columns.
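A sketch of this preprocessing, assuming NumPy arrays; the helper names and the small epsilon guard in the scaler are our own:

```python
import numpy as np

def fit_minmax(train_images):
    """Per-indicator min/max computed over the training set only."""
    stacked = np.concatenate(train_images, axis=0)   # rows from all training days
    return stacked.min(axis=0), stacked.max(axis=0)

def minmax_scale(image, col_min, col_max):
    """Equation (1): scale each indicator column into [0, 1] with training stats."""
    return (image - col_min) / (col_max - col_min + 1e-12)

def backfill_quarterly(column):
    """Backward fill: replace each NaN with the next observed value in the series."""
    col = column.copy()
    nxt = np.nan
    for i in range(len(col) - 1, -1, -1):   # walk backwards, carrying the next value
        if np.isnan(col[i]):
            col[i] = nxt
        else:
            nxt = col[i]
    return col

print(backfill_quarterly(np.array([np.nan, 2.0, np.nan, np.nan, 5.0])))  # [2. 2. 5. 5. 5.]
```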
Market-Aware Stock Return Prediction
Let us assume that we want to predict the return of stock $s$ at day $t$ based on information from the previous $n$ days. This means that we have to learn a market representation with respect to $s$ given the previous $n$ market images as the market context. First, we rotate and stack the market images to construct a 3D market cube $C$. Rows index the temporal dimension, columns index stocks, and channels index indicators, as shown in Figure 4. Let $C_{i,\cdot,k}$ refer to the vector (over stocks) indexed by $i$ in the temporal dimension and $k$ in the factor dimension of the market cube, and let $C_{i,j,\cdot}$ refer to the vector (over indicators) indexed by $i$ in the temporal dimension and $j$ in the stock dimension. Separately, we initialize stock embeddings $E$, where $E_j$ is the $j$th column's stock embedding.
Second, we use a convolutional neural network (CNN) to generate multiple feature maps of the market cube through multiple convolution operations. Each convolution operation involves a filter $w_j$, which is applied to a window of 1 day to produce a new feature:

$v_{j,i} = f(w_j \cdot C_i + b_j) \qquad (2)$

where $C_i$ denotes the slice of the market cube at time step $i$ and $f$ is a ReLU activation function for introducing nonlinearities into the model. So we have a 1D convolution filter that slides its window vertically with stride 1 along the first (temporal) dimension of the market cube to produce a feature map column vector $v_j$. $w_j$ denotes the $j$th kernel; in our experiments, we use 192 different kernels.
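The convolution step can be sketched as follows. The assumption that each kernel spans all stocks and indicators for a single day (so that each kernel yields one scalar feature per day) is our reading of the text, not a detail the paper states explicitly:

```python
import numpy as np

def conv_1day(cube, kernels, bias):
    """Equation (2): each kernel covers one day's full stock-by-indicator slice
    and slides with stride 1 along the temporal axis of the market cube.

    cube:    (T, N, K) market cube (days x stocks x indicators)
    kernels: (J, N, K) one filter per kernel
    bias:    (J,)
    returns: (T, J)    one temporal feature-map column v_j per kernel
    """
    T = cube.shape[0]
    flat_cube = cube.reshape(T, -1)                    # (T, N*K)
    flat_kern = kernels.reshape(kernels.shape[0], -1)  # (J, N*K)
    pre = flat_cube @ flat_kern.T + bias               # (T, J)
    return np.maximum(pre, 0.0)                        # ReLU nonlinearity f

rng = np.random.default_rng(1)
cube = rng.normal(size=(10, 6, 3))      # toy cube: 10 days, 6 stocks, 3 indicators
kernels = rng.normal(size=(192, 6, 3))  # 192 kernels, as in the experiments
feats = conv_1day(cube, kernels, np.zeros(192))
print(feats.shape)                      # (10, 192)
```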
Given a target stock embedding $e_s$, the attention model returns a weighted arithmetic mean of the feature maps $\{v_j\}$, where the weights are chosen according to the relevance of each $v_j$ to the stock embedding $e_s$. We use the additive attention mechanism explained in (Bahdanau, Cho, and Bengio 2014). In equation (3), $W_1$, $W_2$ and $u$ are learned attention parameters:

$z_j = u^{\top} \tanh(W_1 v_j + W_2 e_s) \qquad (3)$

We compute attention weights using a softmax function:

$\alpha_j = \dfrac{\exp(z_j)}{\sum_{j'} \exp(z_{j'})} \qquad (4)$

The conditioned market embedding $m$ is calculated by:

$m = \sum_j \alpha_j v_j \qquad (5)$
Intuitively, each filter serves to summarize correlations between different stocks across multiple indicators. Each kernel is in charge of finding a different type of pattern among the raw indicators. The attention mechanism over the CNN outputs is responsible for selecting the patterns on which to focus for a particular target stock. The resulting conditioned market embedding summarizes the information contained in the market cube, focusing on the group of stocks in the market that relates to the stock whose return we predict.
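Equations (3)-(5) can be sketched as follows; the parameter shapes are assumptions, chosen to match the hyperparameters reported later (192 kernels, stock-embedding size 100, attention size 32):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def attend(feature_maps, stock_emb, W1, W2, u):
    """Additive (Bahdanau-style) attention over per-kernel feature maps,
    conditioned on a target stock embedding (equations 3-5).

    feature_maps: (J, T)  one temporal feature map v_j per kernel
    stock_emb:    (d,)    embedding e_s of the target stock
    W1, W2, u:    learned attention parameters (shapes are our assumption)
    returns:      (T,)    conditioned market embedding m = sum_j alpha_j * v_j
    """
    scores = np.tanh(feature_maps @ W1 + stock_emb @ W2) @ u  # z_j, shape (J,)
    alpha = softmax(scores)                                   # attention weights
    return alpha @ feature_maps                               # weighted mean

rng = np.random.default_rng(2)
J, T, d, a = 192, 10, 100, 32
v = rng.normal(size=(J, T))
m = attend(v, rng.normal(size=d),
           rng.normal(size=(T, a)), rng.normal(size=(d, a)), rng.normal(size=a))
print(m.shape)   # (10,)
```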
The attention-based CNN relays its learned dense market embedding to the next stage: a long short-term memory (LSTM) recurrent neural network, which models the temporal dependencies in a sequence of multidimensional features for a specific stock. Recurrent neural networks (Hochreiter and Schmidhuber 1997; Mikolov et al. 2010) are widely applied in natural language processing to capture long-range dependencies in time series data (Cho et al. 2014; Sutskever, Vinyals, and Le 2014; Dyer et al. 2015; Yang et al. 2017). The attention mechanism (Bahdanau, Cho, and Bengio 2014; Luong, Pham, and Manning 2015) has become an indispensable component of time series modeling with recurrent neural networks, providing an evolving view of the input sequence as the output is generated. In our stock market modeling application, we want to be able to summarize the overall trend while picking up important time points in the past history; we therefore use recurrent neural networks to model temporal aspects of the stock market.
The sequence of multidimensional features for stock $s$ over the previous $n$ days is sequentially encoded using an LSTM cell of size 25. The mechanism of the LSTM is defined as:

$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$
$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$
$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$
$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$
$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$
$h_t = o_t \odot \tanh(c_t)$
We treat the last hidden LSTM output as the representation of the target stock's performance over the past $n$ days, denoted $p_s$.
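The LSTM recurrence can be sketched directly in NumPy; the stacked-weight layout (all four gates in one matrix) is a common implementation convention, not necessarily the authors':

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell (the recurrence above).
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,), H = cell size.
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*H:1*H])          # input gate
    f = sigmoid(z[1*H:2*H])          # forget gate
    o = sigmoid(z[2*H:3*H])          # output gate
    g = np.tanh(z[3*H:4*H])          # candidate cell state
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

# Encode a toy 10-step sequence; the final hidden state summarizes the history.
rng = np.random.default_rng(3)
D, H, T = 8, 25, 10                  # input dim, cell size 25 (as in the text), window 10
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)   # (25,)
```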
Finally, we feed both our learned dense market embedding $m$ and the stock performance embedding $p_s$ to a feedforward neural network. They are nonlinearly transformed separately in the first layer and then concatenated to predict the target stock return:

$\hat{r}_{s,t} = W_r\,[\,g(W_m m)\,;\,g(W_p p_s)\,] + b_r \qquad (6)$

where $g$ is a nonlinear activation and $[\,\cdot\,;\,\cdot\,]$ denotes concatenation.
Evaluation
We conducted an evaluation of our market-attention recurrent neural network (MARNN) model.
For labels, we built stock return matrices for each market image. We used 1-day and 5-day returns for short-term predictions and 15-day and 30-day returns for long-term predictions, denoted by the horizon $n \in \{1, 5, 15, 30\}$. In order to reduce the effect of volatility on returns, we divide each daily return by the standard deviation of its recent past returns (cf. the Sharpe ratio). The moving window used to calculate the standard deviation is 10 days:

$\tilde{r}_t = \dfrac{r_t}{\operatorname{std}(r_{t-10}, \ldots, r_{t-1})} \qquad (7)$
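A sketch of the volatility adjustment in equation (7), with hypothetical return values:

```python
import numpy as np

def vol_adjusted_returns(returns, window=10):
    """Equation (7): divide each daily return by the standard deviation of the
    returns over the preceding `window` days (cf. the Sharpe ratio).
    The first `window` entries have no trailing history and stay NaN."""
    returns = np.asarray(returns, dtype=float)
    out = np.full_like(returns, np.nan)
    for t in range(window, len(returns)):
        sigma = returns[t - window:t].std()
        out[t] = returns[t] / sigma
    return out

r = np.array([0.01, -0.02, 0.005, 0.0, 0.015, -0.01, 0.02, -0.005, 0.01, 0.0,
              0.03])
print(vol_adjusted_returns(r)[-1])   # last return scaled by its trailing 10-day std
```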
We divided our input market images into training, validation and backtest sets by time, as shown in Table 2.
Table 2: Division of market images into training, validation and backtest sets.

                Training    Validation  Backtest
Period          1999–2012   2012–2015   2015–2016
#Trading Days   3265        754         504
We trained our MARNN model with the following hyperparameters: a convolution stride size of 1; a dimensionality of 100 for the trainable stock embeddings; a dimensionality of 32 for the attention vector of the convolutional neural network; a dimensionality of 40 for the final market representation vector; a cell size of 32 for the LSTM; hidden layers of dimensionality 100 and 50, respectively, for our fully connected layers; ReLU nonlinearity; and a time window of 10. All the initialized weights were sampled from a uniform distribution [-0.1, 0.1]. The minibatch size was 10. The models were trained end-to-end using the Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.001 and gradient clipping at 5.
For benchmarking our MARNN model, we chose several standard machine learning models. We conducted two experiments. For both experiments, we report MSE of the % return prediction as our metric.
First, we compared the performance of models with and without market information. Linear regression, feedforward neural networks and support vector regression (Drucker et al. 1997) serve as our market-info-free comparison models. Our market-attention model relies solely on the learned market representation $m$ (i.e., it involves only the CNN with attention, and ignores the output of the LSTM). We used a linear kernel function in the SVR model with penalty parameter C=0.3. The feedforward neural network had two hidden layers of size 50 and sigmoid nonlinearity. The LSTM cell size was 25 in the recurrent neural network baseline.
We found that market awareness can be successfully modeled to improve stock price prediction. As illustrated in Table 3, at every time interval (1 day, 5 days, 15 days and 30 days) the marketattention model has lower MSE than the other models, which have no information about the market as a whole.
Table 3: MSE of % return prediction for the market-info-free baselines and the market-attention model.

Model  n=1  n=5  n=15  n=30
Linear Regression  3.711  6.750  12.381  18.429
(baseline)  2.411  4.917  8.149  11.930
(baseline)  1.727  3.952  6.967  9.088
(baseline)  1.426  2.896  5.854  7.923
Market-Attention Model  0.91  1.63  4.383  5.114
Second, we compared the market-attention model with our MARNN model to show the value of explicitly modeling temporal dependencies. We found that temporal awareness can be successfully exploited in a market-aware model for improved stock price prediction. As illustrated in Table 4, our MARNN model has lower MSE than our baseline market-attention CNN.
Table 4: MSE of % return prediction for the market-attention model and MARNN.

Model  n=1  n=5  n=15  n=30
Market-Attention Model  0.91  1.63  4.383  5.114
MARNN  0.790  1.210  3.732  4.523
Generic Market Representation: MarketSegNet
Based on our finding from the previous section that market awareness leads to improved stock prediction accuracy, we propose a novel method to learn a generic market representation in an end-to-end manner. The market representation learning problem is to convert market images (potentially of variable dimensions) into fixed-size dense embeddings for general-purpose use. As a test of the fidelity of this representation, it should be possible to reconstruct the input market image pixel-wise from the generic market embedding.
Inspired by (Badrinarayanan, Kendall, and Cipolla 2017), we developed a deep fully convolutional autoencoder architecture for pixel-wise regression. The convolutional encoder-decoder model was originally proposed for scene understanding applications, such as semantic segmentation (Long, Shelhamer, and Darrell 2015; Badrinarayanan, Kendall, and Cipolla 2017) and object detection (Ren et al. 2015). A convolutional encoder builds feature representations in a hierarchical way, and is able to take in images of arbitrary sizes, while a convolutional decoder is able to produce an image of a corresponding size. By using convolutional neural networks, the extracted features exhibit strong robustness to local transformations such as affine transformations and even truncations (Zheng, Yang, and Tian 2017). In a stock market modeling application, after representing each day's overall stock market as an image, we believe that (1) building features in a hierarchical way can provide a better summary of the market, since stocks exhibit an inherent hierarchical structure, and (2) robustness to local transformations is desirable, since the stock universe is constantly changing, with new companies added and others removed, while we do not want the overall market representation to be greatly affected by the addition or removal of a single company.
Since our market image has a different spatial configuration compared to a natural image, we customize the structure of our end-to-end architecture. The encoder network is composed of traditional convolutional and pooling layers, which reduce the resolution of the market image through max-pooling and subsampling operations. Meanwhile, the encoder network stores the max-pooling indices used in the pooling layers, to be applied in the upsampling operations in the corresponding decoder network. The decoder network upsamples the encoder output using the transferred pooling indices to produce sparse feature maps, and uses convolutional layers with a trainable filter bank to densify the feature maps so as to recover the original market image. Since companies are grouped in the market image by sector, max-pooling in the encoder network can capture the trends of stocks in the same sector.
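A toy sketch of the pooling-indices mechanism (single channel, 2×2 windows, even dimensions); the real model wraps learned convolutional layers around these operations:

```python
import numpy as np

def maxpool_with_indices(x, k=2):
    """2x2 max-pooling that also records argmax positions, as in SegNet-style
    encoders; here for a single-channel 2D map with even dimensions."""
    H, W = x.shape
    pooled = np.zeros((H // k, W // k))
    idx = np.zeros((H // k, W // k), dtype=int)   # flat index into the input
    for i in range(0, H, k):
        for j in range(0, W, k):
            patch = x[i:i+k, j:j+k]
            p = np.argmax(patch)
            pooled[i//k, j//k] = patch.flat[p]
            idx[i//k, j//k] = (i + p // k) * W + (j + p % k)
    return pooled, idx

def unpool(pooled, idx, shape):
    """Decoder-side upsampling: scatter each pooled value back to the position
    recorded by the encoder, leaving other entries zero (a sparse feature map)."""
    out = np.zeros(np.prod(shape))
    out[idx.ravel()] = pooled.ravel()
    return out.reshape(shape)

x = np.array([[1., 5., 2., 0.],
              [3., 4., 8., 1.],
              [0., 2., 1., 7.],
              [6., 0., 3., 2.]])
p, idx = maxpool_with_indices(x)
print(p)                         # [[5. 8.] [6. 7.]]
print(unpool(p, idx, x.shape))   # maxima restored in place, zeros elsewhere
```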
To evaluate MarketSegNet, we compare its ability to reconstruct input market images with that of a wellknown algorithm for dimensionality reduction, Principal Component Analysis (PCA). PCA uses singular value decomposition to identify variables in the input data that account for the largest amount of variance. We used our training data both to train our MarketSegNet model and to fit a PCA model. We then used the MarketSegNet and PCA models to compress the market images in our test data, and to reconstruct images of the dimensionality of the input images. We compared the reconstruction error rates of PCA and our MarketSegNet model. Since we varied the sizes of our learned market embeddings from 16 to 128, for each size we created a PCA model with that number of principal components.
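The PCA baseline can be sketched with a plain SVD; the array sizes here are toy stand-ins for flattened market images, not the paper's data:

```python
import numpy as np

def pca_fit(X, k):
    """Fit a k-component PCA on the rows of X (flattened training images)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                       # mean and top-k principal axes

def pca_reconstruct(X, mu, components):
    """Project onto the principal subspace, then map back to input space."""
    Z = (X - mu) @ components.T             # compressed codes, shape (n, k)
    return Z @ components + mu

rng = np.random.default_rng(4)
train = rng.normal(size=(200, 64))          # toy flattened "market images"
test = rng.normal(size=(50, 64))
for k in (16, 32, 64):                      # embedding sizes, one PCA per size
    mu, comps = pca_fit(train, k)
    err = np.mean((test - pca_reconstruct(test, mu, comps)) ** 2)
    print(k, round(err, 3))                 # reconstruction MSE shrinks as k grows
```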
Our results are shown in Figure 5. For every size of market embedding, MarketSegNet has lower reconstruction error than PCA.
Conclusions and Future Work
In this paper, we present a method for constructing a ‘market image’ for each day in the stock market. We then describe two applications of this market image:
1. As input to ML-based models for stock price prediction. We demonstrate (a) that market awareness leads to reduced error vs. non-market-aware methods, and (b) that temporal awareness across stacks of market images leads to further reductions in error.

2. As input to an ML-based method for constructing generic market embeddings. We show that the learned market embeddings are better able to reconstruct the input market image than PCA across a range of dimensionality reductions, indicating that they capture more information about the input market image.
We should emphasize that our baseline market-attention CNN, our MARNN model, and our MarketSegNet market embeddings do not represent trading strategies. They are agnostic to trading costs, the opportunity cost of being out of the market, and other factors that matter for an active trading strategy. That said, they may provide information that is useful in trading strategies. Other research groups that have used the models described here have reported improved performance in predicting the directionality of stock price moves on earnings day, and in assessing which events will move markets. We leave further exploration of the applications of these models to future work.
References
 [Alkhatib et al. 2013] Alkhatib, K.; Najadat, H.; Hmeidi, I.; and Shatnawi, M. K. A. 2013. Stock price prediction using knearest neighbor (kNN) algorithm. International Journal of Business, Humanities and Technology 3(3):32–44.
 [Badrinarayanan, Kendall, and Cipolla 2017] Badrinarayanan, V.; Kendall, A.; and Cipolla, R. 2017. Segnet: A deep convolutional encoderdecoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(12):2481–2495.
 [Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
 [Cho et al. 2014] Cho, K.; Van Merriënboer, B.; Bahdanau, D.; and Bengio, Y. 2014. On the properties of neural machine translation: Encoderdecoder approaches. arXiv preprint arXiv:1409.1259.
 [Drucker et al. 1997] Drucker, H.; Burges, C. J.; Kaufman, L.; Smola, A. J.; and Vapnik, V. 1997. Support vector regression machines. In Advances in Neural Information Processing Systems, 155–161.
 [Dyer et al. 2015] Dyer, C.; Ballesteros, M.; Ling, W.; Matthews, A.; and Smith, N. A. 2015. Transitionbased dependency parsing with stack long shortterm memory. arXiv preprint arXiv:1505.08075.
 [Graham and Dodd 2002] Graham, B., and Dodd, D. 2002. Security Analysis. McGraw Hill Professional.
 [Hochreiter and Schmidhuber 1997] Hochreiter, S., and Schmidhuber, J. 1997. Long shortterm memory. Neural Computation 9(8):1735–1780.
 [Kingma and Ba 2014] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
 [Lee 2001] Lee, J. W. 2001. Stock price prediction using reinforcement learning. In Proceedings of the IEEE International Symposium on Industrial Electronics, volume 1, 690–695.
 [Long, Shelhamer, and Darrell 2015] Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.
 [Luong, Pham, and Manning 2015] Luong, M.T.; Pham, H.; and Manning, C. D. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025.
 [Malkiel and Fama 1970] Malkiel, B. G., and Fama, E. F. 1970. Efficient capital markets: A review of theory and empirical work. The Journal of Finance 25(2):383–417.
 [Mikolov et al. 2010] Mikolov, T.; Karafiát, M.; Burget, L.; Cernockỳ, J.; and Khudanpur, S. 2010. Recurrent neural network based language model. In Proceedings of the Annual Conference of the International Speech Communication Association.
 [Mojaddady, Nabi, and Khadivi 2011] Mojaddady, M.; Nabi, M.; and Khadivi, S. 2011. Stock market prediction using twin Gaussian process regression. Technical report, Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran.
 [Murphy 1999] Murphy, J. 1999. Technical Analysis of the Financial Markets: A Comprehensive Guide to Trading Methods and Applications. New York Institute of Finance Series. New York Institute of Finance.
 [Murphy 2011] Murphy, J. 2011. Intermarket analysis: profiting from global market relationships, volume 115. John Wiley & Sons.
 [Ren et al. 2015] Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster RCNN: Towards realtime object detection with region proposal networks. In Advances in Neural Information Processing Systems, 91–99.
 [Sutskever, Vinyals, and Le 2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 3104–3112.
 [Yang et al. 2017] Yang, Z.; Hu, Z.; Deng, Y.; Dyer, C.; and Smola, A. 2017. Neural machine translation with recurrent attention modeling. In Proceedings of EACL.

 [Zheng, Yang, and Tian 2017] Zheng, L.; Yang, Y.; and Tian, Q. 2017. SIFT meets CNN: A decade survey of instance retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(5):1224–1244.