Commit f2f0dbc5 authored by Omran Saleh

minor changes

parent af3803e4
To provide a higher abstraction level, we have developed a Web application called \textbf{AutoStudio}, shown in Fig.~\ref{fig:auto}. It is a user-friendly, browser-based application that runs cross-platform, built with HTML5, Draw2D touch\footnote{\url{http://www.draw2d.org/draw2d/index.html}}, and node.js\footnote{\url{http://www.nodejs.org}}. AutoStudio provides several functionalities:
\begin{itemize}
\item It enables users to leverage the emerging \PipeFlow language graphically via a collection of operators (represented by icons) that can simply be ``dragged and dropped'' onto a drawing canvas. The user assembles the operators into a dataflow graph, visually showing how they are related, and from this graph an equivalent \PipeFlow script is generated. Clicking on an operator icon opens a pop-up window in which the user specifies the operator's required parameters. Moreover, the user can display the help contents for each operator.
\item Contacting the \PipeFlow system to generate the right program (e.g., a Storm, Spark Streaming, or PipeFabric program) based upon the user's selection of engine on the dashboard page. The user therefore does not need to know the syntax and constructs of any stream-processing language, including \PipeFlow. Through the application, the user can trigger the execution of the script by the \PipeFlow system, which calls the respective engine; a sketch of such a service interface is given after this list. Moreover, the application handles real-time statistics, including execution and performance results, sent by the \PipeFlow system while the program is running. When the execution is complete, the application can send an email to the user.
\item It provides options for saving the generated scripts or flow designs for future reference, loading a saved script, and executing it whenever required.
\item An evaluation tool for the generated scripts, for users interested in comparing and evaluating the performance of stream-processing systems in terms of throughput, latency, and resource consumption such as CPU and memory. The evaluation can be performed online using dynamic figures or offline using static figures.
\end{itemize}
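As an illustration only, the following is a minimal sketch of a server-side interface through which a front end like AutoStudio could hand work to the \PipeFlow system. All names are hypothetical and do not denote the actual AutoStudio API; the real front end is built on node.js, while the \PipeFlow back end is written in Java.
\begin{lstlisting}[caption=Sketch of a hypothetical AutoStudio service interface, label=autosketch]
// Hypothetical interface between the AutoStudio front end and the
// PipeFlow back end; all names are illustrative, not the actual API.
public interface PipeFlowService {
  enum Engine { STORM, SPARK_STREAMING, PIPEFABRIC }

  // Translate a PipeFlow script into a program for the chosen engine.
  String generate(String pipeFlowScript, Engine engine);

  // Compile and run the generated program on the chosen engine;
  // statistics are pushed back while the program executes.
  void execute(String program, Engine engine, StatsListener listener);

  interface StatsListener {
    void onStats(double throughput, double latencyMs);
    void onFinished(); // e.g., triggers the notification email
  }
}
\end{lstlisting}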
\begin{figure}
\centering
\includegraphics[width=3.5in, height = 2in]{autostudio.eps}
\caption{AutoStudio: the \PipeFlow web application}
\label{fig:auto}
\end{figure}
During the demo session, we will bring a laptop running our system and demonstrate
its capabilities and supported services by employing real queries and datasets, which can be interactively
explored by the demo audience. Since the front end of our system is a web-based application, the whole demonstration can be carried out via the laptop's browser. We will prepare some sample scripts expressed in \PipeFlow. The users will create these scripts graphically by dragging and dropping operators and connections as well as specifying the operators' parameters. The audience can switch between the different generated programs (i.e., for different engines) without changing the original dataflow script and observe the differences between the constructs of these programs. To show real-time results, the generated programs will be compiled and executed on the respective engines via our system. Moreover, the audience will observe real-time performance measurements for the engines through the static and dynamic figures on the dashboard page.
A wide range of (near) real-time applications process stream-based data, including
financial data analysis, traffic management, telecommunication monitoring, environmental monitoring, smart grids, weather forecasting, and social media analysis. These applications focus mainly on finding useful information and patterns on-the-fly, as well as deriving valuable higher-level information from lower-level data in continuously incoming streams, in order to report and monitor the progress of certain activities. In the last few years, several systems for processing streams of information have been proposed, each offering its own processing solution. They were pioneered by academic systems such as Aurora and Borealis~\cite{Abadi:2003:ANM:950481.950485} and commercial systems like IBM InfoSphere Streams or StreamBase. Recently, some novel distributed stream computing platforms have been developed based on data-parallelization approaches, which aim to support scalable operation in cluster environments for processing massive data streams. Examples of these platforms are Storm~\cite{storm}, Spark Streaming~\cite{spark}, and Flink~\cite{flink}. Although these engines provide abstractions for processing (possibly) infinite streams of data, they lack support for higher-level declarative languages. Some of these engines provide only a programming interface, where operators and topologies have to be implemented in a programming language like Java or Scala. Moreover, to build a particular program (i.e., query) in these systems, users must be experts with deep knowledge of the syntax and programming constructs of the respective language, especially if the system supports multiple languages. Therefore, no time and effort savings can be achieved, as the user needs to write each programming statement correctly. To make life much easier, the current trend in data analytics should be to adopt the ``Write once, run anywhere'' slogan. This slogan was first used by Sun Microsystems to illustrate that Java code can be developed on any platform and be expected to run on any platform equipped with a Java virtual machine (JVM). In general, the development of various stream processing engines raises the question of whether we can provide a unified programming model or a standard language in which the user writes one stream-processing script and can expect to execute it on any stream-processing engine. Bringing all these things together, we provide a demonstration of our solution called \PipeFlow. In our \PipeFlow system, we address the following issues:
\begin{itemize}
\item Developing a scripting language that provides most of the features of stream-processing scripting languages, e.g., those of Storm and Spark Streaming. To this end, we have chosen a dataflow language called \PipeFlow. Initially, this language was intended to be used in conjunction with a stream processing engine called PipeFabric \cite{DBIS:SalBetSat14year2014,DBIS:SalSat14}; later, it was extended to be used with other engines. The source script written in the \PipeFlow language is parsed and compiled, and a target program (i.e., for Spark Streaming, Storm, or PipeFabric) is generated based upon the user's selection. This target program is equivalent in its functionality to the original \PipeFlow script. Once the target program is generated, the user can execute it on the specific engine.
\item Mapping or translating a \PipeFlow script into other programs requires every operator in \PipeFlow to have an implementation in the target engine. Since \PipeFlow contains a set of predefined operators, all of these operators have been implemented, directly or indirectly, in each supported engine; a sketch of such an operator registry is given after this list.
\item Providing a flexible architecture that allows users to extend the system with more engines as well as new operators. These extensions should integrate into the system smoothly.
\item Developing a front-end web application that enables users who have little experience with the \PipeFlow language to express the program, its associated processing algorithm, and the data pipeline graphically.
\end{itemize}
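As an illustration of the operator-mapping requirement above, the following sketch shows a hypothetical operator registry that checks, before translation, that every \PipeFlow operator used in a script has an implementation for the selected engine. The class and method names are invented for this example and are not the actual \PipeFlow internals.
\begin{lstlisting}[caption=Sketch of a hypothetical operator registry, label=opreg]
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;

// Maps each PipeFlow operator to a per-engine implementation
// (here: the name of a code template). Illustrative sketch only.
public class OperatorRegistry {
  public enum Engine { STORM, SPARK_STREAMING, PIPEFABRIC }

  private final Map<String, EnumMap<Engine, String>> impls = new HashMap<>();

  public void register(String op, Engine engine, String template) {
    impls.computeIfAbsent(op, k -> new EnumMap<>(Engine.class))
         .put(engine, template);
  }

  // Fails fast if an operator cannot be mapped to the target engine.
  public String lookup(String op, Engine engine) {
    EnumMap<Engine, String> m = impls.get(op);
    if (m == null || !m.containsKey(engine))
      throw new IllegalArgumentException(
          op + " has no implementation for " + engine);
    return m.get(engine);
  }
}
\end{lstlisting}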
One useful application of this approach is helping developers evaluate various stream processing systems. Instead of manually writing several programs that perform the same task for different systems, writing a single script in our approach helps to get the same result faster and more efficiently.
The remainder of the paper is structured as follows: In Sect.~\ref{sec:pipeflow}, we introduce the \PipeFlow language and the system architecture, and give an example of the translation from a \PipeFlow script to engine programs. Next, in Sect.~\ref{sec:app}, we describe our front-end application and give details about its design and provided functionalities. Finally, the planned demonstration is described in Sect.~\ref{sec:demo}.
\maketitle
\begin{abstract}
Recently, some distributed stream computing platforms have been
developed for processing massive data streams, such as Storm and Spark Streaming. However, these platforms lack support for higher-level declarative languages and provide only a programming interface. Moreover, users must be well versed in the syntax and programming constructs of each language on these platforms. In this paper, we demonstrate our \PipeFlow system, in which the user can write a stream-processing script (i.e., a query) using a higher-level dataflow language. This script can be translated into different stream-processing programs that run on the corresponding engines. In this case, the user only needs to know a single language; thus, he/she can write one stream-processing script and expect to execute it on different engines.
\end{abstract}
In this section, we provide a description of our \PipeFlow language and the system architecture, in addition to an example of mapping a \PipeFlow script to PipeFabric and Storm programs.
\subsection{\PipeFlow Language}
The \PipeFlow language is a dataflow language inspired by Hadoop's Pig Latin \cite{Olston2008}. In general, a \PipeFlow script describes a directed acyclic graph of dataflow operators, which are connected by named pipes. A single statement has the following general form:
\begin{alltt}
\$out := operator(\$in, ...) [on fields]
    [using (parameters)] [with (schema)];
\end{alltt}
where the square brackets denote optional clauses (cf. the example script below).
\begin{figure}
\centering
\caption{\PipeFlow architecture}
\label{fig:arch}
\end{figure}
In our approach, we have adopted the automated code translation (ACT) technique: we take an input source script written in the \PipeFlow language and convert it into an output program for another engine. The \PipeFlow system is written in Java and depends heavily on the ANTLR\footnote{\url{http://www.antlr.org}} and StringTemplate\footnote{\url{http://www.stringtemplate.org}} libraries. The former generates a \PipeFlow language parser that can build and walk parse trees, whereas the latter generates code using pre-defined templates. Basically, the following components are used to translate a \PipeFlow source script into an equivalent target program (PipeFabric, Spark Streaming, or Storm): \emph{(1)} the parser, \emph{(2)} the flow graph, \emph{(3)} the template file, and \emph{(4)} the code generator. The latter two components are specific to the target program and differ from each other depending on the target code to be generated. Therefore, for every target code a separate template file and code generator are created.
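As a rough illustration of how these four components could be wired together, consider the following sketch. The classes \texttt{PipeFlowLexer} and \texttt{PipeFlowParser} are what ANTLR would generate from the \PipeFlow grammar; \texttt{FlowGraphBuilder}, \texttt{FlowGraph}, and \texttt{StormGenerator} are illustrative names for the components described below, not the actual class names.
\begin{lstlisting}[caption=Sketch of the translation pipeline, label=actsketch]
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;

// ACT pipeline sketch: parse -> flow graph -> engine-specific code.
public class Translator {
  public static String toStorm(String script) {
    // (1) Parser: ANTLR-generated lexer/parser build the parse tree.
    PipeFlowLexer lexer = new PipeFlowLexer(CharStreams.fromString(script));
    PipeFlowParser parser = new PipeFlowParser(new CommonTokenStream(lexer));
    ParseTree tree = parser.script(); // hypothetical root grammar rule

    // (2) Flow graph: a listener walks the tree, building nodes and pipes.
    FlowGraphBuilder builder = new FlowGraphBuilder();
    ParseTreeWalker.DEFAULT.walk(builder, tree);
    FlowGraph graph = builder.getGraph();

    // (3)+(4) Template file and code generator emit the target program.
    return new StormGenerator("storm.stg").generate(graph);
  }
}
\end{lstlisting}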
\textbf{Role of Components}: The roles and functionalities of each of the above-mentioned components are described below and shown in Fig.~\ref{fig:arch}.
\begin{description}
\item[Parser:] This component performs the lexical and syntactic analysis by parsing the input script written in the \PipeFlow language and identifying the dataflow graph instances in this script. Initially, ANTLR generates a parser for the \PipeFlow language automatically, based on the \PipeFlow grammar. This parser creates a parse tree, which is the data structure representing how the grammar matches the \PipeFlow script. Additionally, ANTLR automatically generates a tree walker interface which can be used to visit the nodes of the parse tree. A new listener, which implements this interface, visits the nodes in order to construct the corresponding flow node instances during the tree traversal. From these flow node instances, the flow graph object can be created.
\item[Flow graph:] A logical representation of the input script, which comprises nodes (flow nodes) and edges (pipes). A flow node instance represents an operator in the dataflow script. It is the intermediary between the parser and the code generator, used to build up a graph of nodes and to generate the target code, respectively. A pipe instance represents an edge between two nodes of the dataflow graph; therefore, each pipe references its input node (the node producing the tuples sent via the pipe) and its output node (the node consuming these tuples from the pipe). This component is mainly used to generate the target code for a specific engine, yet irrespective of the target program to be generated, this component remains the same.
\item[Code generator:] Once the flow graph is generated from the input \PipeFlow source script, the code generator generates the target code based on this graph. It takes the flow graph produced by the parser and a template file, processes all nodes and pipes iteratively, and creates the equivalent code using the StringTemplate library. As previously mentioned, there is a separate code generator for each target engine.
\item[Template file:] Also called a string template group (.stg) file. We can imagine this file as a string with holes that have to be filled by the code generator. Inside this file, rules with their arguments are defined to specify how to format each operator's code and which parts of it have to be rendered. Some parts are rendered as they are, whereas other parts contain place-holders that are replaced with the provided arguments. The template file is different for each target, since the format and syntax of each target program differ. It contains the library files to be included, the packages to be imported, the code blocks for the operators, etc.; a small rendering sketch follows this list.
\end{description}
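To illustrate the ``string with holes'' idea mentioned above, the following sketch renders a single template rule with StringTemplate. The \texttt{STGroupString}, \texttt{ST}, \texttt{add}, and \texttt{render} calls are the actual StringTemplate~4 API, while the rule name and its arguments are invented for this example.
\begin{lstlisting}[caption=Sketch of template rendering with StringTemplate, label=stsketch]
import org.stringtemplate.v4.ST;
import org.stringtemplate.v4.STGroupString;

public class TemplateDemo {
  public static void main(String[] args) {
    // A one-rule template group: <out>, <in>, <pred> are the "holes".
    String stg =
        "filter(out, in, pred) ::= <<\n"
      + "Stream <out> = <in>.each(new Fields(\"x\"), new MyFilter(<pred>));\n"
      + ">>\n";
    ST st = new STGroupString(stg).getInstanceOf("filter");
    st.add("out", "s1").add("in", "input").add("pred", "0");
    System.out.println(st.render());
    // Prints: Stream s1 = input.each(new Fields("x"), new MyFilter(0));
  }
}
\end{lstlisting}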
\subsection{An Example of Translation}
Consider below a simple\footnote{Due to the limited number of pages.} sample script written in \PipeFlow that is to be translated into PipeFabric and Storm programs by our system. The script reads a stream that contains the fields \texttt{x} and \texttt{y}, filters it on the \texttt{x} field (here, by way of illustration, \texttt{x > 0}), and aggregates it to find the sum of all \texttt{y} fields for a particular \texttt{x}. Note that the \PipeFlow constructs are simpler than those of the other engines.
\begin{alltt}
\$in := file_source() using (filename =
"input.csv") with (x int, y int);
\$1 := filter(\$in) by x > 0;
\$2 := aggregate(\$1) on x
generate x, sum(y) as counter;
\end{alltt}
The following shows the most important parts of the generated Storm and PipeFabric code, respectively. Both programs provide the same functionality, but with different syntax and on different engines.
\begin{lstlisting}[caption=Generated Storm code, label=storm]
public class MyFilter extends BaseFilter {
  public boolean isKeep(TridentTuple tuple) {
    // Illustrative predicate, matching the filter in the script above.
    return tuple.getInteger(0) > 0;
  }
}
\end{lstlisting}