Supported SQL dialects: SAS and Hive
The goal is to convert most SQL queries to Spark DataFrame (Scala) code and reduce the copy-paste work developers do during such migrations. Spark DataFrames provide high performance and reduce cost, so I converted many SQL (Hive and SAS) scripts to Spark DataFrames. During these conversions I noticed a similar pattern in every query, which led me to create this project, which converts SQL to DataFrame code. It handles complex nested SQL, arithmetic expressions, and nested functions, converting them to Spark DataFrames. A sample example is provided below.
Advantages of DataFrames over SQL:

- UDFs and UDAFs can be written using Scala's functional programming features.
- Better type safety than SQL.
- Filters and select expressions can be defined at runtime through the functional programming API.
- Complex SQL can be split into small DataFrames, which helps optimization.
- Decreased load on the metastore.
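The runtime-filter advantage above can be sketched as follows. This is only an illustration of the approach, not part of the converter itself; `student` and the column names are hypothetical:

```scala
import org.apache.spark.sql.{Column, DataFrame}
import org.apache.spark.sql.functions.{col, lit}

// Build a filter from (column, maxValue) pairs that are only known at runtime,
// e.g. read from a config file or user input -- something plain SQL strings
// cannot express as cleanly.
def runtimeFilter(df: DataFrame, bounds: Seq[(String, Int)]): DataFrame = {
  val predicate: Column = bounds
    .map { case (name, max) => col(name) <= lit(max) }
    .reduce(_ && _) // combine all conditions with AND
  df.filter(predicate)
}

// Usage: keep students with weight <= 100 and height <= 200.
// runtimeFilter(student, Seq(("weight", 100), ("height", 200)))
```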
```sql
CREATE TABLE STUDENT_TEMP AS
SELECT DISTINCT
  student_id AS id,
  weight || '_' || (CASE
    WHEN WEIGHT BETWEEN 0 AND 50 THEN 'LOW'
    WHEN WEIGHT BETWEEN 51 AND 70 THEN 'MEDIUM'
    WHEN WEIGHT BETWEEN 71 AND 100 THEN 'HIGH'
    ELSE 'VERY HIGH'
  END) AS NEWWEIGHT
FROM STUDENT;
```
```scala
val STUDENT_TEMP = student.
  select($"student_id".as("id"), concat($"weight", lit("_"),
    when($"weight".between(lit(0), lit(50)), lit("low")).
      otherwise(when($"weight".between(lit(51), lit(70)), lit("medium")).
        otherwise(when($"weight".between(lit(71), lit(100)), lit("high")).
          otherwise(lit("very high"))))).as("newweight")).
  distinct
```
```sql
CREATE TABLE STUDENT_TEMP AS
SELECT DISTINCT
  student_id AS id,
  weight || '_' || (CASE
    WHEN WEIGHT BETWEEN 0 AND 50 THEN 'LOW'
    WHEN WEIGHT BETWEEN 51 AND 70 THEN 'MEDIUM'
    WHEN WEIGHT BETWEEN 71 AND 100 THEN 'HIGH'
    ELSE 'VERY HIGH'
  END) AS NEWWEIGHT
FROM
  (SELECT * FROM STUDENT WHERE student_id IS NOT NULL);
```
```scala
val STUDENT_TEMP_a = student.
  filter($"STUDENT_ID".isNotNull).
  select($"*")

val STUDENT_TEMP = STUDENT_TEMP_a.as("a").
  select($"student_id".as("id"), concat($"weight", lit("_"),
    when($"weight".between(lit(0), lit(50)), lit("low")).
      otherwise(when($"weight".between(lit(51), lit(70)), lit("medium")).
        otherwise(when($"weight".between(lit(71), lit(100)), lit("high")).
          otherwise(lit("very high"))))).as("newweight")).
  distinct
```
- Clone the code from Git.
- Edit the properties in Config.scala (no extra configuration is required to differentiate Hive from SAS; the configs are kept in a Scala file since the user is expected to be a data engineer).
- Edit SasSqlToSparkSql.scala and SqltoDf.scala to iterate over each SQL statement according to your input.
- Run the MainRun.scala class.
- Add the jar to the classpath.
- Instantiate io.github.mvamsichaitanya.codeconversion.sqltodataframe.DataFrame by passing a select statement to it.
- The toString method returns the generated DataFrame code.
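Assuming the steps above, usage might look like the following sketch. The exact constructor signature may differ between versions of the project, so treat this as illustrative rather than definitive:

```scala
import io.github.mvamsichaitanya.codeconversion.sqltodataframe.DataFrame

// Pass a select statement to DataFrame; toString returns the generated
// Spark DataFrame code as a string.
val converted = new DataFrame(
  "SELECT DISTINCT student_id AS id FROM STUDENT")
println(converted.toString)
```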
- No CREATE TABLE statements with column definitions, as below, should be present in the input file:

  ```sql
  CREATE TABLE table_1(column_1 int,
    column_2 bigint,
    column_3 string);
  ```

- No UNION ALL queries.
- It will not convert to DataFrames with 100% accuracy; the developer should validate the generated code against the original SQL after conversion.
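One way to perform that validation is to compare the original SQL result with the converted DataFrame inside Spark. This is a hedged sketch; `spark`, `originalSql`, and `converted` are illustrative names, and the source table must already be registered with Spark:

```scala
// Run the original query through Spark SQL.
val expected = spark.sql(originalSql)

// A symmetric difference with zero rows means the two results match.
val mismatches = expected.except(converted).union(converted.except(expected))
assert(mismatches.count() == 0, "converted DataFrame differs from SQL result")
```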