To provide a higher level of abstraction, we have developed a Web application called \textbf{AutoStudio}. It is a user-friendly, browser-based application that runs cross-platform, built with HTML5, Draw2D touch\footnote{\url{http://www.draw2d.org/draw2d/index.html}}, and node.js\footnote{\url{http://www.nodejs.org}}. AutoStudio provides several functionalities:
\begin{itemize}
\item It enables users to leverage the emerging \PipeFlow language graphically via a collection of operators (represented by icons) that can simply be ``dragged and dropped'' onto a drawing canvas. The user assembles the operators into a dataflow graph in a logical way, visually showing how they are related, and from this graph an equivalent \PipeFlow script can be generated, as sketched after this list. Clicking on an operator icon opens a pop-up window in which the user specifies the operator's required parameters. Moreover, the user can display the help contents for each operator.
\item It contacts the \PipeFlow system to generate the right script (e.g., a Storm, Spark Streaming, or PipeFabric script) based upon the user's selection of the target language on the dashboard page. The user therefore does not need to know the syntax and constructs of any stream-processing language, including \PipeFlow. Through the application, the user can trigger the execution of the script by the \PipeFlow system, which calls the respective engine. Moreover, the application handles real-time statistics, including execution and performance results, sent by the \PipeFlow system while the script is executing. When the execution is complete, the application can notify the user by email.
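For illustration, connecting a file-source icon to a filter icon on the canvas would yield a generated script of the following shape (a sketch: the file name, schema, and predicate are illustrative; the operator syntax is introduced in Sect.~\ref{sec:pipeflow}):
\begin{alltt}
\$src := file_source() using (filename =
"input.csv") with (x int, y int);
\$flt := filter(\$src) by x > 18;
\end{alltt}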
A wide range of (near) real-time applications process stream-based data, including financial data analysis, traffic management, telecommunication monitoring, environmental monitoring, the smart grid, weather forecasting, and social media analysis. These applications focus mainly on finding useful information and patterns on-the-fly, as well as deriving valuable higher-level information from lower-level data in continuously incoming streams, in order to report on and monitor the progress of certain activities. In the last few years, several systems for processing streams of information have been proposed, each offering its own processing solution. The field was pioneered by academic systems such as Aurora and Borealis~\cite{Abadi:2003:ANM:950481.950485} and commercial systems like IBM InfoSphere Streams or StreamBase. Recently, novel distributed stream computing platforms have been developed based on data parallelization approaches, which aim to support scalable operation in cluster environments for processing massive data streams. Examples of these platforms are Storm~\cite{storm}, Spark Streaming~\cite{spark}, and Flink~\cite{flink}. Although these stream processing engines (SPEs) provide abstractions for processing (possibly) infinite streams of data, they lack support for higher-level declarative languages. Some of these engines provide only a programming interface, where operators and topologies have to be implemented in a programming language like Java or Scala. Moreover, to build a particular program (i.e., a query) in these systems, users must be experts with deep knowledge of the syntax and programming constructs of the language, especially if the system supports multiple languages. Hence, no time and effort savings can be achieved, as the user needs to write each programming statement correctly. To make life easier, the current trend in data analytics should be to adopt the ``Write once, run anywhere'' slogan, coined by Sun Microsystems to illustrate that Java code can be developed on any platform and be expected to run on any platform equipped with a Java virtual machine (JVM). In general, the development of various stream processing engines raises the question of whether we can provide a unified programming model or a standard language in which the user writes one stream-processing script and can expect to execute it on any stream-processing engine. Bringing all these things together, we provide a demonstration of our solution, called \PipeFlow. In the \PipeFlow system, we address the following issues:
\begin{itemize}
\item Developing a scripting language that provides most of the features of stream-processing scripting languages, e.g., those of Storm and Spark Streaming. To this end, we have designed a dataflow language called \PipeFlow. Initially, this language was intended to be used in conjunction with a stream processing engine called PipeFabric \cite{DBIS:SalBetSat14year2014,DBIS:SalSat14}; later, it was extended to other engines. A source script written in the \PipeFlow language is parsed and compiled, and a target program (for Spark Streaming, Storm, or PipeFabric) is generated based upon the user's selection. The target program is functionally equivalent to the original \PipeFlow script. Once it has been generated, the user can execute it on the respective engine.
\item Mapping or translating a \PipeFlow script into other scripts requires each operator in \PipeFlow to be implemented in the target engine. Since \PipeFlow contains a set of predefined operators, all of these operators have been implemented, directly or indirectly, in each supported engine.
\item Providing a flexible architecture that allows users to extend the system with additional engines as well as new operators; such extensions should integrate smoothly into the system.
\item Developing a front-end web application that enables users with little experience in the \PipeFlow language to express a script, together with its associated processing algorithm and data pipeline, graphically.
\end{itemize}
One useful application of this approach is helping developers evaluate various stream processing systems: instead of manually writing several scripts that perform the same task on different systems, writing a single script in our approach yields the same result faster and more efficiently.
The remainder of the paper is structured as follows: In Sect.~\ref{sec:pipeflow}, we introduce the \PipeFlow language, the system architecture, and an example of the translation between scripts. Next, in Sect.~\ref{sec:app}, we describe our front-end application and give details about its design and the functionalities it provides. Finally, the planned demonstration is described in Sect.~\ref{sec:demo}.
\maketitle
\begin{abstract}
Recently, distributed stream computing platforms such as Storm and Spark Streaming have been developed for processing massive data streams. However, these platforms lack support for higher-level declarative languages and provide only a programming interface. Moreover, users must be well versed in the syntax and programming constructs of each language in these platforms. In this paper, we demonstrate our \PipeFlow system, in which the user writes a stream-processing script (i.e., a query) in a higher-level dataflow language. This script can be translated to different stream-processing scripts written in different languages that run on the corresponding engines. The user thus only needs to know a single language: he/she can write one stream-processing script and expect to execute it on different engines.
\end{abstract}
In this section we provide a description of our \PipeFlow language and the system architecture.
\subsection{\PipeFlow Language}
The \PipeFlow language is a dataflow language inspired by Hadoop's Pig Latin \cite{Olston2008}. In general, a \PipeFlow script describes a directed acyclic graph of dataflow operators, which are connected by named pipes. A single statement is given by:
\begin{alltt}
\$out := op(\$in1, \$in2, \dots) ... clause ...;
\end{alltt}
where \texttt{\$out} denotes a pipe variable referring to the typed output stream of operator \texttt{op} and \texttt{\$in\emph{i}} refers to its input streams. By using the output pipe of one operator as the input pipe of another, a dataflow graph is formed. Dataflow operators can be further parametrized by the clauses described below.
\begin{itemize}
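\item A \texttt{by} clause parametrizes an operator such as \texttt{filter} with a predicate. A minimal sketch (the field name and predicate are illustrative):
\begin{alltt}
\$res := filter(\$in) by x > 18;
\end{alltt}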
where \texttt{\$res} is the output pipe to which the filter operator publishes its results. Similarly, a source operator can be parametrized by a \texttt{using} clause for operator parameters and a \texttt{with} clause for the schema of the produced tuples:
\vspace*{-0.5cm}
\begin{alltt}
\begin{center}
\$out := file_source() using (filename =
"input.csv") with (x int, y int, z string);
\end{center}
\end{alltt}
\vspace*{-0.5cm}
In our approach, we have adopted the automated code translation (ACT) technique.
\subsection{An Example of Translation}
Consider the following simple\footnote{Due to the limitation on the number of pages.} sample script written in \PipeFlow, which is to be translated to PipeFabric and Storm by our system. The script reads a stream that contains \texttt{x} and \texttt{y} fields; tuples are filtered on the \texttt{x} field and aggregated to compute the sum of all \texttt{y} values for each particular \texttt{x}. Note that the \PipeFlow constructs are simpler than those of the other engines.
\begin{alltt}
\$in := file_source() using (filename =
"input.csv") with (x int, y int);
\$1 := filter(\$in) by x > 18;
\$2 := aggregate(\$1) on x
generate x, sum(y) as counter;
\end{alltt}
The following listings show the most important parts of the generated Storm and PipeFabric code, respectively. Both programs have the same functionality, but in different languages and for different engines.
\begin{lstlisting}[caption=Generated Storm code, label=storm]