Ingesting High-Velocity Streaming Graphs from Social Media Sources
This work is partly funded by NSF Grant 1738411 and the AWESOME Project at the San Diego Supercomputer Center.
Many data science applications like social network analysis use graphs as their primary form of data. However, acquiring graph-structured data from social media presents some interesting challenges. The first challenge is the high data velocity and bursty nature of the social media data. The second challenge is that the complex nature of the data makes the ingestion process expensive. If we want to store the streaming graph data in a graph database, we face a third challenge – the database is very often unable to sustain the ingestion of high-velocity, high-burst data. We have developed an adaptive buffering mechanism and a graph compression technique that effectively mitigate the problem. A novel aspect of our method is that the adaptive buffering algorithm uses the data rate, the data content, as well as the CPU resources of the database machine to determine an optimal data ingestion mechanism. We further show that an ingestion-time graph-compression strategy improves the efficiency of the data ingestion into the database. We have verified the efficacy of our ingestion optimization strategy through extensive experiments.
A significant fraction of data used in data science today comes from streaming data sources. These include data from social media streams like Facebook and Twitter, IoT data from sensors, and stock market data from stock exchanges and financial information sources. There are two broad categories of data science research for streaming data – real-time analytics and non-real-time analytics. In the first case, analytics tasks can be performed on a small window of in-flight data as the data streams in. For example, Bifet et al. [3] develop a streaming decision tree technique that operates on an in-memory snapshot of data and adapts to changes in streams. In the second case, although the data is collected in real time, a data ingestion system needs to collect data for some time before the analytics operations can be executed. For example, computing the hourly frequency distribution of hashtags from Twitter would require the system to store data because, due to the high velocity of Twitter streams, an hour’s worth of data will often exceed the memory capacity of the streaming system. In this case, the ingestion of the streaming data must keep up with the fluctuating data rates of the stream so that it does not have to resort to the load shedding schemes that were applied to a previous generation of stream processing systems (e.g., [2]).
The reality, however, is that when the data gets more complex and needs pre-processing before storage, there is a distinct bandwidth gap between the data rate of the stream producer and the ingestion capability of the store that houses the streaming data. In this paper, we investigate the nature of, and mitigation strategies for, this bandwidth gap problem in the context of streaming social media data (a JSON stream) that is transformed into a graph and stored in a graph database for analytics operations. We consider graph data to be more complex because, in contrast to relational records without strong integrity constraints, the nodes and edges of a graph are not independent of each other. Therefore, while edges of the graph arrive in random order in streaming data, the DBMS has to spend additional time to ensure that two neighboring edges ingested at different times from the stream are connected to the same node inside the DBMS. Thus, the ingestion cost of graph data is higher, leading to the bandwidth gap between the ingestion rate and the storage rate of streaming graphs. Interestingly, while processing high-volume graph data for network analysis is an emerging research area [8, 16], ingestion optimization of streaming graphs is still an unexplored area.
Example Use Case. To motivate the problem, we present an illustrative use case from the domain of political science. The objective of the study is to understand patterns in political conversations and public opinions on social media in the USA. One of the data sources for the study is Twitter; we use Twitter’s streaming API (a 1% sample) from which we filter tweets by using a set of domain-specific keywords. During politically charged times, the rate of tweets received shows bursty behavior. Figure 1 shows the rate of data arrival over a 25-second period.
The figure shows the bursty nature of tweet arrival, with a peak value over 2,500. This is in comparison with the average rate of 60 tweets/sec (1% of about 5,787 tweets per second [17]) available in real time using the Twitter API. This tweet stream is accepted by an ingestor process, transformed into a graph model by a transformer process, and then pushed to a backend graph DBMS (Neo4j). As the velocity of tweets increases and the transformer process pushes increasingly more data to the store, the Neo4j machine reaches 100% CPU user time (Figure 2), while there is a small decrease in memory availability. The deterioration of system efficiency is further evidenced by the speed of context switching (Figure 3). With no intervention, this results in a significant slowdown or a total failure of the DBMS server.
This system failure points to a largely ignored aspect of the data science infrastructure – with all the advances in improving “Big Variety” problems, ingestion of streaming graph data into graph databases has remained unaddressed [4, 15].
In this paper, we address the above form of system failure by combining two completely different approaches to the problem:
Adaptive Buffering. We develop an adaptive buffering scheme that monitors both the data arrival rate and the CPU load of the server and balances the effective ingestion load transmitted to the server.
Graph Compression. We exploit the information redundancy in the social media data content to compress the graph load that would be ingested by the DBMS.
For adaptive buffering, we create a predictive model of how the CPU load will be impacted by the buffer size and the variation of data content as the data rate fluctuates. We find that the buffer size itself is controlled by a metric related to the diversity of the data content when the data is transformed into a graph. For graph compression, we make use of the observation that during a burst, a large number of users post correlated content and, in the process, reuse hashtags created by others.
We show that using this combined approach, we can largely adapt to the velocity and burstiness, and only on rare occasions resort to spilling the incoming data to local storage of the ingestor machine.
II. Ingesting Streaming Graphs
Our stream processing engine has a pipelined architecture as shown in Figure 4. In the following, we first present the building blocks of this architecture and then present the controlling algorithms for managed stream ingestion.
II-A. Data Processing Pipeline
The data processing pipeline, consisting of seven steps, is developed on top of a threaded, multiprocessing and partially distributed environment. The primary computation in the pipeline is data manipulation and transformation which is executed by breaking up the stream into mini-batches.
Filter: The ingestion process starts by filtering out data items (tweets) that do not satisfy the semantic requirements of the system. The filter is applied in two stages. The first set of filters is applied as a parameter of the streaming API provided by the data source (Twitter, in this case). In our example, we provide a set of keywords to the Twitter API for a specific application, and receive a data stream satisfying the filter. In the second stage, we apply a set of analysis-specific filtering criteria (e.g., remove tweets with only emojis). The choice of filtering criteria has a profound effect on the effective data rate of a stream. In our example, our keywords involve names of US politicians and some political issues. Therefore, whenever a political issue grabs public attention, we see a significant burst in the rate at which the data streams into our system. We have observed 15-45% velocity fluctuations on a normal day and over 250% fluctuation on extremely busy days.
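The two stages can be sketched as follows; the keyword set, the field names, and the emoji-only test below are illustrative stand-ins for the application-specific configuration, not our exact production filters.

```python
import re

# Hypothetical keyword list; the real set is supplied to the streaming API.
KEYWORDS = {"senate", "election", "healthcare"}

def passes_keyword_filter(tweet: dict) -> bool:
    """First stage: emulate the server-side keyword filter of the streaming API."""
    text = tweet.get("text", "").lower()
    return any(kw in text for kw in KEYWORDS)

# Crude emoji-only test: the text contains no word characters at all.
EMOJI_ONLY = re.compile(r"^[\W\s]*$")

def passes_analysis_filter(tweet: dict) -> bool:
    """Second stage: drop tweets that carry no analyzable text (e.g., emoji-only)."""
    return not EMOJI_ONLY.match(tweet.get("text", ""))

def filter_stream(tweets):
    """Apply both stages to a mini-batch of tweets."""
    return [t for t in tweets if passes_keyword_filter(t) and passes_analysis_filter(t)]
```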
Buffer: The filtered data is collected in a buffer. As mentioned before, the size of the buffer is an important parameter in the efficiency of data ingestion. Although using a buffer is a standard strategy, we determined that a fixed buffer size does not effectively handle the problem of efficient ingestion management. During a burst period, the CPU of the DBMS machine quickly goes to 100% load, and a large buffer is needed to absorb the bursting content and control the CPU load. However, a large buffer also delays the ingestion process, because when the content of a large buffer is transmitted, the ingestion load on the DBMS increases. To counter this dilemma, we have developed an adaptive buffer management strategy, described in the next subsection, that senses the impact of an upcoming burst and adaptively modifies the size of the buffer. We show that the required buffer size depends significantly not only on the data rate but also on the content of the data. The use of content in controlling the buffer size distinguishes our work from traditional buffer management algorithms that only perform congestion control [10, 18].
Model Transformation: Model transformation is the process of transforming data from its native form to a target form that conforms to the data model supported by the destination storage. In our case, tweets enter the system as a stream of tree-structured objects (JSON) and need to be converted into a property graph (a graph where both nodes and edges can have attributes and values). In this step, a tweet (sometimes a tweet set, as shown later) is manipulated to construct typed nodes, labeled edges, node properties and edge properties. Figure 6 shows an example of model transformation, where the JSON elements called user and tweet become nodes, while hashtag, a JSON property, is unnested and transformed into nodes because in the target graph model a hashtag will be shared by a number of nodes. The edges of the graph, namely owner, mentioned, hashtag-used-in and mentioned-with-ht, are constructed from the JSON content. For instance, the edge mentioned-with-ht connects a hashtag with a user who is mentioned in the tweet. In general, the model transformation uses a configuration file that specifies the mapping between the input and the target data. Fig. 5 shows the node types and node properties the target graph must have, and a mapping section that specifies how these properties can be populated from the input (e.g., using the getName function).
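The transformation of Figure 6 can be sketched as below. The input field names (user, hashtags, mentions) and the hard-coded edge labels follow the example above, but they are illustrative: in the actual system the mapping comes from the configuration file rather than from code.

```python
def transform_tweet(tweet: dict):
    """Sketch of the model-transformation step: unnest one JSON tweet into
    typed nodes and labeled edges of a property graph."""
    nodes, edges = [], []
    user = tweet["user"]
    # user and tweet JSON elements become typed nodes with properties
    nodes.append(("user", user["id"], {"name": user.get("name")}))
    nodes.append(("tweet", tweet["id"], {"text": tweet.get("text")}))
    edges.append(("owner", ("user", user["id"]), ("tweet", tweet["id"]), {}))
    # hashtags are unnested into nodes so they can be shared across tweets
    for ht in tweet.get("hashtags", []):
        nodes.append(("hashtag", ht, {}))
        edges.append(("hashtag-used-in", ("hashtag", ht), ("tweet", tweet["id"]), {}))
        for mentioned in tweet.get("mentions", []):
            edges.append(("mentioned-with-ht", ("hashtag", ht), ("user", mentioned), {}))
    for mentioned in tweet.get("mentions", []):
        nodes.append(("user", mentioned, {}))
        edges.append(("mentioned", ("tweet", tweet["id"]), ("user", mentioned), {}))
    return nodes, edges
```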
Batch Optimizer: The model-transformed data is prepared for ingestion by wrapping the data into INSERT clauses for the graph database. However, the actual insertion process is expensive because of the ingestion latency of the target DBMS. This is managed by grouping the INSERT operations into “batches”. The batch optimization process determines the optimal batch size to improve system efficiency. Mini-batching [7, 13, 19] is a standard strategy for optimizing throughput during ingestion. However, we exploit the observation that although the number of tweets increases during a burst, there is a high degree of redundancy among them. This presents an optimization opportunity because the redundant portions of a graph must be ingested only once. In our case, this optimization takes the form of dynamic graph compression, which utilizes the fact that nodes like user and hashtag need to be computed only once during batch creation.
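The compression opportunity during batch creation can be illustrated as follows: within one mini-batch, each distinct (type, id) node yields a single insertion regardless of how many tweets reference it. The function name, batch size, and record layout here are illustrative, not the paper's exact code.

```python
def build_batch(transformed, batch_size=500):
    """Sketch of batch creation with ingestion-time node deduplication.
    `transformed` is a list of (nodes, edges) pairs, one per tweet, where a
    node is a (type, id, properties) tuple; `batch_size` is illustrative."""
    seen, node_ops, edge_ops = set(), [], []
    for nodes, edges in transformed[:batch_size]:
        for node in nodes:
            key = (node[0], node[1])  # (type, id) identifies a node
            if key not in seen:       # emit each shared node only once
                seen.add(key)
                node_ops.append(node)
        edge_ops.extend(edges)
    return node_ops, edge_ops
```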
Graph Ingestor: The graph ingestor has two parts. The first part manipulates the data structures of the model transformation step and constructs the ingestion instructions so that the graph can be ingested by the target DBMS; this construction implements the graph compression method. The second part is an interface between our pipeline (Fig. 4) and the graph DBMS. The ingestor pushes the data to the DBMS ingestion pool, where the pool size is predefined and managed by third-party connectors. In our example, we choose the Neo4j DBMS, which uses Bolt as its graph connector.
III. Ingestion Control
In this section, we present the design principle behind the adaptive buffer control and graph compression algorithms to improve ingestion efficiency.
III-A. Modeling the Ingestion Problem
To manage the buffer for the streaming data, we first need to establish the factors that govern the buffer size. Let us first define a set of relevant parameters.
Graph Density (δ(G)): The density of a graph G is the ratio of the number of edges of G to the maximum possible number of edges that can be induced by the nodes of G. Thus, for a directed graph, density δ(G) = |E(G)| / (|V(G)| · (|V(G)| − 1)), where |·| is the cardinality function.
Ingestion Buffer Size (B_I): The ingestion buffer is the memory space used for all pre-processing operations on the streaming data, including filtering and model transformation.
Effective Buffer/Output Buffer (B_E): The effective buffer (or output buffer) is the buffer that contains the output of the model transformation.
Bucket Diversity Ratio (d): A bucket b_t is a mini-batch of graph data that will be sent to the database for ingestion at time t. The bucket to be sent immediately has time index 0, the bucket to be sent right after has index 1, and so forth. The diversity of a bucket is the proportion of new nodes (e.g., new hashtags) that appear in that bucket. The bucket diversity ratio d is the average ratio of new nodes observed over the temporal buckets.
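As an illustration, the bucket diversity ratio can be computed over a window of temporal buckets as follows (a sketch; buckets are represented simply as lists of node identifiers).

```python
def bucket_diversity_ratio(buckets):
    """Average proportion of previously unseen node ids per temporal bucket,
    following the Bucket Diversity Ratio definition above."""
    seen, ratios = set(), []
    for bucket in buckets:          # each bucket: iterable of node ids
        bucket = list(bucket)
        if not bucket:
            continue                # skip empty buckets
        new = [n for n in bucket if n not in seen]
        ratios.append(len(new) / len(bucket))
        seen.update(bucket)
    return sum(ratios) / len(ratios) if ratios else 0.0
```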
Based on the above parameters, the model to predict the effective buffer size is

B_E = F(δ(G), d)    (1)
Notice that the model does not use B_I as a variable because B_E is generated from B_I. To set up the model, we assume that the model function F does not depend on time, but its parameters need to be dynamically determined at each time chunk. We further assume that the effects of δ(G) and d on B_E are linearly separable. Thus,

B_E = K · f_1(δ(G)) + R · f_2(d)    (2)
where the functions f_1 and f_2 and their linear coefficients K and R need to be learned from the data. The results of the prediction model are presented in detail in Section IV. Once the parameters of this model are determined, we need to estimate how the buffer size, i.e., the volume of data to be sent to the graph database for ingestion, impacts the stability of system resources on the DBMS side. The obvious candidate performance metrics on the DBMS side are memory, CPU user time (called CPU-usage later), context switching of the CPU, and interrupts per second. To simplify the model, we experimentally observe these metrics (Fig. 7) over time, where no buffer control is exercised over the input streams. A comparison of these performance metrics shows that the CPU-usage rises from about 40% to 100% in less than a second as the number of ingested data records (i.e., the effective buffer size) increases. This effectively increases the delay time because the CPU spends longer updating the database.
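As a sketch of the learning step, the coefficients K and R of the linearly separable model of Eq. 2 can be obtained by ordinary least squares over observed (f_1, f_2, buffer size) triples. In practice we use multivariate regression in scikit-learn; the stdlib-only version below shows the same computation via the 2×2 normal equations, assuming the two feature columns are not collinear.

```python
def fit_separable_model(samples):
    """Least-squares fit of B_E = K*f1 + R*f2 from (f1, f2, B_E) triples,
    by solving the 2x2 normal equations directly."""
    s11 = sum(x1 * x1 for x1, _, _ in samples)
    s22 = sum(x2 * x2 for _, x2, _ in samples)
    s12 = sum(x1 * x2 for x1, x2, _ in samples)
    s1y = sum(x1 * y for x1, _, y in samples)
    s2y = sum(x2 * y for _, x2, y in samples)
    det = s11 * s22 - s12 * s12        # assumes non-collinear features
    K = (s1y * s22 - s2y * s12) / det
    R = (s2y * s11 - s1y * s12) / det
    return K, R
```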
The ingestion delay T_D is the time gap between a record appearing in the stream and the time the record is ready for querying. In other words, T_D is the total time that the data stays inside the ingestion pipeline. There are two factors responsible for this delay – buffer latency and system delay. Buffer latency refers to the time delay a data item experiences in the buffer due to the effective buffer size, while system delay refers to the delay that occurs because the CPU load was too high for the previous mini-batch, which delays getting and processing the next mini-batch. Let us assume that at any time point t the delay is the sum of the bucket delay d_b(t) and the system delay d_s(t). Hence, the total delay over n time units is:

T_D = Σ_{t=1..n} (d_b(t) + d_s(t))
Let the expected value of the CPU-usage at the t-th time point be C(t) and the effective buffer size be B_E(t). Then ΔC(t) = C(t) − C(t−1) is the change of CPU usage at t. Since we intend to regulate the CPU use of the DBMS machine even when the streaming data is very large, our goal is to bound the value of ΔC(t) to achieve system stability. We observe that C(t) monotonically increases as B_E(t), the effective buffer size of the ingestion control system, increases. However, the nature of this monotonic function needs to be determined by a second predictive model, of the form

C(t) = g(B_E(t))    (3)
Substituting Eq. 2 from the first prediction model into Eq. 3, we get

C(t) = g(K · f_1(δ(G)) + R · f_2(d))    (4)
In Section IV, we experimentally estimate the form of the model and the parameters of these equations, and demonstrate how we can effectively control the streaming ingestion problem.
In this section we present the algorithms referred to in the previous subsections. The first, Algorithm 1, performs model transformation (Section II-A), where a JSON object is manipulated to construct a property graph; in the process, we implement the graph compression strategy mentioned in Section I. The second algorithm implements the buffer control technique based on the prediction models from Section III-A. The third algorithm controls the actual ingestion process that transmits data from the effective buffer to the DBMS server.
Graph Model Transformation Algorithm: Designed to be flexible, the graph model transformation algorithm takes as input an XML-structured mapping file and a data extraction library that can parse an input data object and extract its sub-elements as needed. This extraction process depends on the data model of the input data file, and is not specific to any particular data set; in our case, the extraction library operates on any JSON file. The problem-specific input and output are specified by the mapping file. This makes the translation more “portable” – to choose a different data source (e.g., Reddit) that conforms to the same data model, one would only need to change the mapping file. We choose an in-memory, edge-centric data structure to represent the graph. The primary task of the algorithm is to extract information to populate an edge table and an indexed node list, and then to generate the insertion instructions from this in-memory representation of the property graph. Figure 9 shows the structure of the edge table. Each edge has a unique id, a start node, an end node, start node properties, end node properties, and edge properties. Node and edge properties are stored as a simple MAP object where the ‘key’ is the name of the property and the ‘value’ is the property value. In addition, a set of table-level metadata like node density and diversity is computed for the edge table. We use a special property ‘count’ to handle duplicate edges: when we encounter a duplicate edge, we increase the value of ‘count’ (line 20 of Algorithm 1). Duplicate detection is handled by the procedure INSERTEDGE() (line 13 of Algorithm 1), which keeps a node index for searching nodes together with a list of connected nodes. The node index is updated on every insertion, and each insertion also searches for duplicate edges.
The indexed nodelist together with the deduplicated edge table are the necessary data structures used in the graph compression step described in Algorithm 3.
The createedge() algorithm works in batch mode. It takes a set of statuses (tweets), a map function, and an extraction library as input. After initiation, the extraction function extracts nodes, node properties, and edge properties (lines 2 to 11) and passes them to the insertedge() function. The run-time complexity of the algorithm is linear in the number of edges.
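A minimal sketch of the edge table and the INSERTEDGE() behavior described above: the node index is updated on every insertion, and a duplicate edge only increments its 'count' property. The class and method names are ours, not the paper's exact code.

```python
class EdgeTable:
    """In-memory, edge-centric representation: an indexed node list plus a
    deduplicated edge table where 'count' absorbs duplicate edges."""
    def __init__(self):
        self.nodes = {}   # node index: (type, id) -> properties
        self.edges = {}   # (label, start_key, end_key) -> edge properties

    def insert_edge(self, label, start, start_props, end, end_props, props=None):
        # update the node index on every insertion
        self.nodes.setdefault(start, {}).update(start_props)
        self.nodes.setdefault(end, {}).update(end_props)
        key = (label, start, end)
        if key in self.edges:                 # duplicate edge: bump its count
            self.edges[key]["count"] += 1
        else:
            self.edges[key] = dict(props or {}, count=1)
```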
Buffer Controller Algorithm: The objective of the buffer control algorithm (Algorithm 2) is to improve system stability during ingestion. Since graph ingestion is a CPU-bound process, our algorithm maintains the ‘CPU-usage’ (the CPU utilization percentage for the user space) level within acceptable bounds (called C_low and C_high respectively). Specifically, as the data rate fluctuates, this algorithm controls the CPU load by adjusting the buffer size within the range [B_min, B_max]. The edge table computes the diversity ratio, the velocity, and the degree distribution of the nodes (lines 17 to 20), and the Zabbix API supplies the CPU-usage. Hence, the input of the algorithm is the average CPU-usage data and the edge table. Depending on the data velocity and the diversity for a particular time range, we predict the actual buffer size by using multivariate linear regression. Next, we estimate the possible maximum buffer size from the CPU-usage data and the “acceleration”, i.e., the second derivative of the data rate. We use linear regression to compute the predicted CPU-usage. The steps of the buffer control algorithm are as follows.
With this input, the algorithm estimates the effective buffer size, the expected CPU-usage, and the ‘velocity’, i.e., the first derivative of the data rate.
If the expected CPU-usage is higher than C_high, it increases the buffer size by α (a constant in the range [0,1]) times the available memory.
It checks whether the CPU-usage is γ times higher than C_high. If so, it writes the data to the local disk, which we call data throttling. Here, α and γ are system-specific constants determined experimentally for our testbed.
If the expected CPU-usage is lower than C_high, it pushes the data to the graph database.
While the buffer size is greater than B_min, it decreases the buffer size by α times the available memory. This increases the availability of the data because with a lower buffer size, the ingestion latency improves.
If the CPU-usage is γ times lower than C_low, it reads from the disk where the data was stored during throttling and pushes it forward to the DBMS server.
At every step of the above process, the buffer size, the expected CPU-usage, and the velocity are supplied by the PREFMON function, which uses our prediction models for the CPU user time.
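The steps above can be sketched as a single decision function. The thresholds, step fraction, and throttling factor correspond to the experimentally determined, system-specific constants mentioned above, but the concrete values and the ordering of the checks here are our own illustrative reading of the steps.

```python
C_HIGH, C_LOW = 0.85, 0.40    # acceptable CPU-usage bounds (illustrative)
ALPHA = 0.05                  # buffer step, as a fraction of available memory
GAMMA = 1.1                   # throttling factor (illustrative)
B_MIN, B_MAX = 1_000, 50_000  # allowed buffer size range (illustrative)

def control_step(state, expected_cpu, avail_mem, disk):
    """One decision step of the buffer controller. state["buffer"] is the
    current buffer size; disk is the throttling spill area. Returns the
    action taken so the caller can act on it."""
    if expected_cpu > GAMMA * C_HIGH:           # far over budget: throttle
        disk.append(state.get("pending"))       # spill pending data to disk
        return "throttle-to-disk"
    if expected_cpu > C_HIGH:                   # over budget: grow the buffer
        state["buffer"] = min(B_MAX, state["buffer"] + ALPHA * avail_mem)
        return "grow-buffer"
    if expected_cpu < C_LOW / GAMMA and disk:   # well under budget: drain spill
        disk.pop()                              # replay throttled data
        return "replay-from-disk"
    if state["buffer"] > B_MIN:                 # under budget: shrink the buffer
        state["buffer"] = max(B_MIN, state["buffer"] - ALPHA * avail_mem)
        return "shrink-buffer"
    return "push-to-dbms"
```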
Graph Insertion Algorithm: The graphpush method used to transmit the graph from the ingestion machine to the graph database is presented in Algorithm 3. The method uses the data from the edge table, node list and node properties to construct the insertion instructions using the CREATE and MERGE statements of Cypher, the query language of our target database.
Algorithm 3 creates the node and edge ingestion statements in Cypher (lines 6 to 11 of Algorithm 3) by extracting the start node and end node from the edges of the edge table. It uses an indexed list (line 4) to ensure that nodes are created only once in the target database. For each commit transaction, it also checks the integrity constraint that the nodes referred to in the edge table also exist in the node list. Since a commit to the DBMS may fail for many practical reasons like network failure, the method stores data in local memory until the timeout. It uses third-party data connectors to create a fault-tolerant connection to the DBMS and to maintain a suitable data pool size at the DBMS.
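The statement-construction part can be sketched as follows: MERGE keeps shared nodes unique in the target database, while CREATE adds the (deduplicated) edges with their counts. This is illustrative only; labels and property names are examples, and real code should pass values as query parameters rather than interpolating them into the Cypher string.

```python
def to_cypher(nodes, edges):
    """Sketch of the statement construction. `nodes` maps (type, id) ->
    properties; `edges` maps (label, start, end) -> properties, with 'count'
    set during edge-table deduplication."""
    stmts = []
    for (ntype, nid) in nodes:
        # MERGE ensures a shared node (e.g., a hashtag) is created only once
        stmts.append(f"MERGE (n:{ntype} {{id: {nid!r}}})")
    for (label, (stype, sid), (etype, eid)), props in edges.items():
        # backticks allow hyphenated relationship types like mentioned-with-ht
        stmts.append(
            f"MATCH (a:{stype} {{id: {sid!r}}}), (b:{etype} {{id: {eid!r}}}) "
            f"CREATE (a)-[:`{label}` {{count: {props['count']}}}]->(b)"
        )
    return stmts
```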
Graph Compression: In the previous process, at the end of the edge list traversal, the algorithm creates a set of unique node insertion instructions along with the corresponding edge instructions. Our process guarantees the removal of duplicate nodes at this stage, while the number of edges is compressed during the edge table creation process. Hence, our algorithm ensures compressed and minimal ingestion for each buffer. In Section IV we show how the compression reduces the ingestion load and demonstrate the interaction between the compression and the buffer size.
Environment and Deployment.
The experimental testbed for our work is a cloud computing environment (the SDSC OpenStack cloud, https://www.sdsc.edu/services/ci/cloud.html) that runs CentOS 7. The deployment diagram in Figure 10 shows an ingestion server node, a database node with Neo4j 3.6, and a performance monitoring server (Zabbix 4.2 agent with JSON-RPC API support). Each node has 2 vCPUs and 32 GB memory and is connected using high-performance switches. The underlying processors are Intel Westmere (Nehalem-C) with a 16384 KB cache and a clock around 2.2 GHz.
Data Set. The data set for our experiments comes from the “Political Data Analysis” project at the San Diego Supercomputer Center. The project collects tweets continuously. In our experiments, we used two forms of data ingestion – (a) directly from the Twitter stream at its natural rate, and (b) streaming data from tweets stored in files, where we programmatically control the streaming rate to test the limits of our solution. In both cases, the period of observation and control was 8 hours. The average velocity of tweets from the direct stream was 4.9 tweets per second, and the maximum rate was 23.78 tweets per second. In the simulation, we multiplied the velocity by up to 5 times, with 5%–20% duplicate tweets.
Implementation Architecture. The stream processing architecture in our experimental setup (Figure 8) is designed as a producer-consumer model. Under the control of the ingestion controller module, the graph ingestor accepts data from the buffer and distributes it over multiple threads to construct the edge table concurrently. The results from the partial edge tables are collected in the Cypher statement buffer, which performs the database commit operation. The ingestion controller governs this process by using the performance monitoring services.
IV-A. Prediction Models
We have tested two prediction models: one for B_E, the effective buffer size, as a function of the graph density δ(G) and the bucket diversity ratio d (Eq. 2), and one for the expected CPU-usage C(t) as a function of B_E (Eq. 4). The models were tested with Python scikit-learn (https://scikit-learn.org).
Expected Buffer Estimation (B_E): We determined that f_1(δ(G)) is linear while f_2(d) fits best with a quadratic function. The linear coefficients K and R were estimated as 0.597 and 1.48, with standard errors 0.024 and 0.021 respectively.
Expected CPU-Usage Estimation (C(t)): Choosing an appropriate model for the CPU-usage was a little more challenging. Table I shows the models we tested for g and their errors; in our experiments, a non-linear model is the closest fit, while a linear model is a close second. Figure 11 shows the observed values (X-axis) vs. the predicted values (Y-axis) of the expected CPU-usage for 4 different settings of B_E. It is seen that a low choice of B_E produces unclear results, but the prediction closely matches the observation for larger settings. For the largest setting of B_E, we observe that while the prediction is still good, the model demonstrates that the CPU-usage takes a quantum jump, explaining the gap in the plot in the CPU-usage range [0.21, 0.35].
IV-B. Effect of Graph Compression
The experimental summary of the graph compression is shown in Figure 13: the X-axis represents the compression ratio (the effective count of insert instructions over the number of original tweets), and the Y-axis represents the effective buffer size at that time. We observe that in most cases the effective compression rate (mean compression rate = 24.97%) varies between 15% and 35%. With increased buffer size, the impact of the compression is not as pronounced. We have observed that during a Twitter storm (e.g., in January 2018 for the hashtag #ReleasetheMemo), when the graph density is high, the algorithm gives a better compression ratio.
IV-C. Performance of the Algorithm
The performance improvement from the final buffer controller is shown in Figure 12. One advantage of using the political tweet data set is that the minimum data rate for tweets is high, and during peak hours there is a 4.5-fold increase in the data rate. In contrast with the uncontrolled CPU-usage, the experiment (Figure 12) shows that after our technique is applied, the CPU user time never reaches a spiking condition. As expected, each time the CPU touches the maximum allowable limit, the algorithm brings it down by not pushing any data during that period. Further, the IPS and the context switching of the CPU remained low and stable, while the memory usage for the entire observation period was generally low. We monitored the system performance on the ingestor machine as well (Figure 14), and observed that CPU and memory utilization is well within control.
In this paper, we addressed a bandwidth gap problem encountered in ingesting and storing social media data. We took advantage of the temporal clustering property of social media data (i.e., the fact that many similar nodes and edges are created during bursty periods) to compress the graph to save ingestion time, and dynamically adjusted the buffer to control the CPU load. Our work sits in the middle of graph analytics research underlying many data science applications [1, 14, 11], which uses small data sets, and graph database research that promotes in-database graph analytics but does not consider streaming input. We view the graph stream ingestion problem discussed in this paper as a component of optimized ingestion control in the AWESOME polystore system [5, 6], where multiple streams of heterogeneous data can flow into a component DBMS managed under the polystore. We expect that the general idea of using buffer control, data compression and resource monitoring for a DBMS can be effectively applied there. In future work, we expect to extend this work to cover a larger variety of data models and data stores.
Secondly, notice that in Algorithm 2, we form a data structure that contains generic graph properties like the degree distribution for the time-slice of data available in the buffer. Metrics like these are the building blocks of more complex analytical measures developed by graph-centric research communities. In future work, we will materialize more of these temporally evolving properties and use them for the evolutionary analysis of the social media graph, community detection [1, 14, 11], and other graph analytics operations, which will benefit from our continuous computation of these “building block” measures.
Finally, our work primarily solves an infrastructure problem that generalizes beyond social media data. As part of our future work, we will apply, and if needed, extend our system to other forms of structured and semi-structured streaming data (e.g., newswire data, lifelog data [9]).
-  N. Ayman, T. F. Gharib, M. Hamdy, and Y. Afify. Influence ranking model for social networks users. In International Conference on Advanced Machine Learning Technologies and Applications, pages 928–937. Springer, 2019.
-  B. Babcock, M. Datar, and R. Motwani. Load shedding for aggregation queries over data streams. In Proceedings. 20th International Conference on Data Engineering, pages 350–361. IEEE, 2004.
-  A. Bifet, J. Zhang, W. Fan, C. He, J. Zhang, J. Qian, G. Holmes, and B. Pfahringer. Extremely fast decision tree mining for evolving data streams. In Proc. of the 23rd ACM Int. Conf. on Knowledge Discovery and Data Mining (KDD), pages 1733–1742. ACM, 2017.
-  R. Calvillo, C. Denton, J. A. Breckman, and J. Palmer. Management system for high volume data analytics and data ingestion, Jan. 4 2018. US Patent App. 15/704,891.
-  S. Dasgupta, K. Coakley, and A. Gupta. Analytics-driven data ingestion and derivation in the AWESOME polystore. In Proc. of the IEEE Int. Conf. on Big Data, pages 2555–2564. IEEE, Dec. 2016.
-  S. Dasgupta, C. McKay, and A. Gupta. Generating polystore ingestion plans - A demonstration with the AWESOME system. In Proc. of the IEEE Int. Conf. on Big Data, pages 3177–3179, Dec. 2017.
-  R. Grover and M. J. Carey. Data ingestion in AsterixDB. In EDBT, pages 605–616, 2015.
-  C.-Y. Gui, L. Zheng, B. He, C. Liu, X.-Y. Chen, X.-F. Liao, and H. Jin. A survey on graph processing accelerators: Challenges and opportunities. Journal of Computer Science and Technology, 34(2):339–371, 2019.
-  C. Gurrin, A. F. Smeaton, A. R. Doherty, et al. Lifelogging: Personal big data. Foundations and Trends in Information Retrieval, 8(1):1–125, 2014.
-  M. Hirano and N. Watanabe. Traffic characteristics and a congestion control scheme for an atm network. International Journal of Digital & Analog Communication Systems, 3(2):211–217, 1990.
-  W. Inoubli, S. Aridhi, H. Mezni, M. Maddouri, and E. M. Nguifo. An experimental survey on big data frameworks. Future Generation Computer Systems, 86:546–564, 2018.
-  M. Kronmueller, D.-j. Chang, H. Hu, and A. Desoky. A graph database of yelp dataset challenge 2018 and using cypher for basic statistics and graph pattern exploration. In 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pages 135–140. IEEE, 2018.
-  J. Meehan, C. Aslantas, S. Zdonik, N. Tatbul, and J. Du. Data ingestion for the connected world. In CIDR 2017, 8th Biennial Conference on Innovative Data Systems Research, Chaminade, CA, USA, January 8-11, 2017, Online Proceedings, 2017.
-  F. S. Pereira, S. de Amo, and J. Gama. Evolving centralities in temporal graphs: a twitter network analysis. In 2016 17th IEEE International Conference on Mobile Data Management (MDM), volume 2, pages 43–48. IEEE, 2016.
-  D. S. Reiner, N. Nanda, and T. Bruce. Ingestion manager for analytics platform, Jan. 4 2018. US Patent App. 15/197,072.
-  S. Samsi, V. Gadepally, M. Hurley, M. Jones, E. Kao, S. Mohindra, P. Monticciolo, A. Reuther, S. Smith, W. Song, et al. Graphchallenge. org: Raising the bar on graph analytic performance. In 2018 IEEE High Performance extreme Computing Conference (HPEC), pages 1–7. IEEE, 2018.
-  G. Stricker. The 2014 #YearOnTwitter. Twitter Blog, 2014.
-  A. Vishwanath, V. Sivaraman, and M. Thottan. Perspectives on router buffer sizing: Recent results and open problems. ACM SIGCOMM Computer Communication Review, 39(2):34–39, 2009.
-  X. Wang and M. J. Carey. An IDEA: An ingestion framework for data enrichment in AsterixDB. CoRR, abs/1902.08271, 2019.