Prepare for release 0.3.8 (#215)
* Remove unused code (#141)


* README: update to 0.3.3

* README: fix javadoc badge

* remove unused param

* [sbt] version updates

* [sbt] disable build for scala 2.12

* [conf] allow not_analyzed string fields (#145)

* [not-analyzed-fields] do not analyze fields ending with _notanalyzed (see the StringField/TextField sketch after this list)

* Revert "Revert "Setting version to 0.3.5-SNAPSHOT""

This reverts commit a6da0af.

* [build] update Lucene to 7.7.0

* Hotfix: issue 150 (#151)

* Remove unused code (#141)

* Revert "Setting version to 0.3.4-SNAPSHOT"

This reverts commit 2f1d7be.

* README: update to 0.3.3

* README: fix javadoc badge

* remove unused param

* [sbt] version updates

* [conf] allow not_analyzed string fields (#145)

* [not-analyzed-fields] do not analyze fields ending with _notanalyzed

* [hotfix] fixes issue 150

* [tests] issue 150

* fix typo

* [blockEntityLinkage] drop queryPartColumns

* [sbt] version updates

* [scripts] fix shell

* Block linkage: allow a block linker from Row to Query (#154) (see the Row => Query sketch after this list)

* [linkage] block linker with Row => Query

* [linkage] block linker is Row => Query

* remove Query analyzer on methods

* [sbt] set version to 0.3.6-SNAPSHOT

* Feature: allow custom analyzers during compile time (#160) (see the custom Analyzer sketch after this list)

* [analyzers] custom analyzer

* test return null

* [travis] travis_wait 1 min

* Revert "[travis] travis_wait 1 min"

This reverts commit c79456e.

* use lucene examples

* custom analyzer return null

* fix java reflection

* add docs

* Update to Lucene 8 (#161)

* [lucene] upgrade to version 8.0.0

* [lucene] remove ngram analyzer

* delete ngram analyzer

* minor fix

* add scaladoc

* LuceneRDDResponseSpec.collect() should work when no results are found - Issue #166 (#168)

* [sbt] update scalatest 3.0.7

* [sbt] update spark 2.4.1

* [build.sbt] add credentials file

* [plugins] update versions

* [sbt] update to 0.13.18

* Allow Lucene Analyzers per field (#164) (see the PerFieldAnalyzerWrapper sketch after this list)

* [issue_163] per field analysis

* [sbt] update scalatest to 3.0.7

* [issue_163] fix docs; order of arguments

* fixes on ShapeLuceneRDD

* [issue_163] fix test

* issue_163: minor fix

* introduce LuceneRDDParams case class

* fix apply in LuceneRDDParams

* [issue_163] remove duplicate apply defn

* add extra LuceneRDD.apply

* [issue_165] throw runtime exception; use traversable trait (#170)

[issue_165] throw runtime exception; handle multi-valued fields in DataFrames

* [config] refactor; add environment variables in config (#173) (see the Typesafe Config sketch after this list)

* [refactor] configuration loading

* [travis] code hygiene

* Make LuceneRDDResponse extend RDD[Row] (#175) (see the GenericRowWithSchema sketch after this list)

* WIP

* fix tests

* remove SparkDoc class

* make test compile

* use GenericRowWithSchema

* tests: getDouble score

* score is a float

* fix casting issue with Seq[String]

* tests: LuceneDocToSparkRowpec

* tests: LuceneDocToSparkRowpec

* more tests

* LuceneDocToSparkRowpec: more tests

* LuceneDocToSparkRowpec: fix tests

* LuceneDocToSparkRow: fix Number type inference

* LuceneDocToSparkRowpec: fix tests

* implicits: remove StoredField for Numeric types

* implicits: revert remove StoredField for Numeric types

* fix more tests

* fix more tests

* [tests] fix LuceneRDDResponse .toDF()

* fix multivalued fields

* fix score type issue

* minor

* stored fields for numerics

* hotfix: TextField must be stored using StoredField

* hotfix: stringToDocument implicit

* link issue 179

* fix tests

* remove _.toRow() calls

* fix compile issue

* [sbt] update to spark 2.4.2

* [travis] use spark 2.4.2

* [build] minor updates

* Remove sbt-spark-package plugin (#181)

* [sbt] remove sbt-spark-package

* WIP

* [sbt] add spark-mllib

* [sbt] make Spark provided (see the downstream build.sbt sketch after this list)

* update sbt to 1.X.X (#182)

* [wip] update to sbt 1.X.X

* [travis] fix script

* [sbt] update to 1.2.8

* [sbt] update all plugins

* [sbt] spark update v2.4.3 (#183)

* [sbt] spark update v2.4.3

* minor update joda-time

* [sbt] update spark-testing

* [sbt] lucene 8.1.0 update (#184)

* [sbt] lucene update 8.1.1 (#185)

* [scalatest] update to 3.0.8

* [sbt] joda-time patch update

* [release-info] add sonatype credentials

* [sbt] lucene 8.2.0 update (#187)

* [sbt] update plugins

* [sbt] update spark 2.4.4 (#188)

* [sbt] update joda to 2.10.4

* [sbt] update joda / typesafe config (#189)

* [sbt] update Lucene 8.3.0 (#191)

* [sbt] version updates (#194)

* Update Lucene to version 8.3.1
* Update Twitter algebird to version 0.13.6
* Update scalatest/scalactic to version 3.1.0

* [github-actions] add scala.yml (#193)

* [github-actions] add scala.yml

* [sbt] update to version 1.3.3 (#195)

* [plugins] update sbt plugins (#196)

* [lucene] update version 8.4.0 (#197)

* fix version to SNAPSHOT

* code hygiene
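
For context on the `_notanalyzed` convention above (#145): a minimal plain-Lucene sketch of the idea, assuming the behavior is keyed off the field-name suffix. The helper below is illustrative, not the library's actual internals.

```scala
import org.apache.lucene.document.{Document, Field, StringField, TextField}

// Sketch: fields whose names end in "_notanalyzed" are indexed verbatim as
// StringField; all other string fields go through the analyzer chain as
// TextField. The helper name is hypothetical.
def addStringField(doc: Document, name: String, value: String): Unit = {
  if (name.endsWith("_notanalyzed")) {
    doc.add(new StringField(name, value, Field.Store.YES)) // not analyzed
  } else {
    doc.add(new TextField(name, value, Field.Store.YES))   // analyzed
  }
}
```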
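For #154: after this change, the block linker maps a Spark `Row` to a Lucene `Query` instead of a query string. A minimal sketch of such a linker; the column names are assumptions, and the exact `blockEntityLinkage` signature is not reproduced here.

```scala
import org.apache.lucene.index.Term
import org.apache.lucene.search.{BooleanClause, BooleanQuery, Query, TermQuery}
import org.apache.spark.sql.Row

// Build a Lucene query per row: require the city to match, and let the
// name term contribute to the score. Column names are illustrative.
val rowToQuery: Row => Query = { row =>
  new BooleanQuery.Builder()
    .add(new TermQuery(new Term("city", row.getAs[String]("city"))), BooleanClause.Occur.MUST)
    .add(new TermQuery(new Term("name", row.getAs[String]("name"))), BooleanClause.Occur.SHOULD)
    .build()
}
```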
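For #160 (custom analyzers): a minimal custom Lucene 8 `Analyzer`, in the spirit of the Lucene examples the commits mention. The class name is an assumption; how the library discovers it (via reflection, per the commits above) is not shown.

```scala
import org.apache.lucene.analysis.{Analyzer, LowerCaseFilter}
import org.apache.lucene.analysis.Analyzer.TokenStreamComponents
import org.apache.lucene.analysis.standard.StandardTokenizer

// A tiny analyzer: standard tokenization followed by lowercasing.
class MyLowercaseAnalyzer extends Analyzer {
  override def createComponents(fieldName: String): TokenStreamComponents = {
    val source = new StandardTokenizer()
    new TokenStreamComponents(source, new LowerCaseFilter(source))
  }
}
```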
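For #164 (per-field analyzers): the underlying plain-Lucene mechanism is `PerFieldAnalyzerWrapper`. A sketch with assumed field names; the library wires this up through its own API rather than requiring you to build the wrapper yourself.

```scala
import scala.collection.JavaConverters._

import org.apache.lucene.analysis.Analyzer
import org.apache.lucene.analysis.core.{KeywordAnalyzer, WhitespaceAnalyzer}
import org.apache.lucene.analysis.en.EnglishAnalyzer
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper

// English analysis by default, exact matching on "id", whitespace
// tokenization on "tags". Field names are illustrative.
val perFieldAnalyzer: Analyzer = new PerFieldAnalyzerWrapper(
  new EnglishAnalyzer(),
  Map[String, Analyzer](
    "id"   -> new KeywordAnalyzer(),
    "tags" -> new WhitespaceAnalyzer()
  ).asJava
)
```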
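For #173 (environment variables in configuration): Typesafe Config's optional-substitution syntax lets an environment variable override a baked-in default. A sketch; the exact configuration keys are assumptions modeled on the `LUCENERDD_*` variables in the `.travis.yml` diff below.

```scala
import com.typesafe.config.ConfigFactory

// reference.conf (HOCON) would contain something like:
//   lucenerdd.analyzer.name = "standard"
//   lucenerdd.analyzer.name = ${?LUCENERDD_ANALYZER_NAME}  // env override, if set
val config = ConfigFactory.load()
val analyzerName: String = config.getString("lucenerdd.analyzer.name")
```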
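For #175 (`LuceneRDDResponse` extends `RDD[Row]`): each Lucene hit becomes a schema-carrying Spark `Row` via `GenericRowWithSchema`, with the score as a `Float` (as the commits above note). A sketch with illustrative field names, including the score field name, which is an assumption.

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema
import org.apache.spark.sql.types.{FloatType, StringType, StructField, StructType}

// A Row that carries its own schema, so fields are accessible by name.
val schema = StructType(Seq(
  StructField("title", StringType, nullable = true),
  StructField("__score__", FloatType, nullable = false) // field name assumed
))
val hit: Row = new GenericRowWithSchema(Array[Any]("hello world", 0.97f), schema)
hit.getAs[Float]("__score__") // 0.97f
```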
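For #181 and the `build.sbt` diff below (Spark moved to `provided` scope after dropping sbt-spark-package): downstream projects now supply Spark themselves at runtime. A sketch of a consumer's `build.sbt`, assuming the usual `org.zouzias` coordinates.

```scala
// Downstream build.sbt (sketch). Spark is "provided" because the runtime
// (spark-submit / spark-shell) already ships it.
libraryDependencies ++= Seq(
  "org.zouzias"      %% "spark-lucenerdd" % "0.3.8",
  "org.apache.spark" %% "spark-core"      % "2.4.4" % "provided",
  "org.apache.spark" %% "spark-sql"       % "2.4.4" % "provided"
)
```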
zouzias authored Dec 30, 2019
1 parent 2b543f2 commit c01d78d
Showing 8 changed files with 68 additions and 53 deletions.
17 changes: 17 additions & 0 deletions .github/workflows/scala.yml
@@ -0,0 +1,17 @@
+name: Scala CI
+
+on: [push]
+
+jobs:
+  build:
+
+    runs-on: ubuntu-latest
+
+    steps:
+    - uses: actions/checkout@v1
+    - name: Set up JDK 1.8
+      uses: actions/setup-java@v1
+      with:
+        java-version: 1.8
+    - name: Run tests
+      run: sbt test
11 changes: 5 additions & 6 deletions .travis.yml
@@ -17,16 +17,15 @@ before_cache:
 matrix:
   include:
     - jdk: oraclejdk8
-      env: TEST_SPARK_VERSION="2.4.2" LUCENERDD_ANALYZER_NAME="en" LUCENERDD_LINKER_METHOD="cartesian"
+      env: LUCENERDD_ANALYZER_NAME="en" LUCENERDD_LINKER_METHOD="cartesian"
    - jdk: openjdk8
-      env: TEST_SPARK_VERSION="2.4.2" LUCENERDD_ANALYZER_NAME="en" LUCENERDD_LINKER_METHOD="collectbroadcast"
+      env: LUCENERDD_ANALYZER_NAME="en" LUCENERDD_LINKER_METHOD="collectbroadcast"
    - jdk: openjdk8
-      env: TEST_SPARK_VERSION="2.4.2" LUCENERDD_ANALYZER_NAME="whitespace" LUCENERDD_LINKER_METHOD="cartesian"
+      env: LUCENERDD_ANALYZER_NAME="whitespace" LUCENERDD_LINKER_METHOD="cartesian"
    - jdk: oraclejdk8
-      env: TEST_SPARK_VERSION="2.4.2" LUCENERDD_ANALYZER_NAME="whitespace" LUCENERDD_LINKER_METHOD="collectbroadcast"
+      env: LUCENERDD_ANALYZER_NAME="whitespace" LUCENERDD_LINKER_METHOD="collectbroadcast"
 script:
-  - sbt ++$TRAVIS_SCALA_VERSION clean update
-    -Dlucenerdd.spatial.linker.method=${LUCENE_SPATIAL_LINKER_METHOD} -test
+  - sbt ++$TRAVIS_SCALA_VERSION -Dlucenerdd.spatial.linker.method=${LUCENE_SPATIAL_LINKER_METHOD} clean update test
   - sbt ++$TRAVIS_SCALA_VERSION scalastyle
   - sbt ++$TRAVIS_SCALA_VERSION assembly
   - travis_wait 30 sbt ++$TRAVIS_SCALA_VERSION clean coverage test coverageReport
35 changes: 12 additions & 23 deletions build.sbt
@@ -79,32 +79,20 @@ pomExtra := <scm>
 
 credentials += Credentials(Path.userHome / ".sbt" / ".credentials")
 
-val luceneV = "8.0.0"
-
-spName := "zouzias/spark-lucenerdd"
-sparkVersion := "2.4.2"
-spShortDescription := "Spark RDD with Lucene's query capabilities"
-sparkComponents ++= Seq("core", "sql", "mllib")
-spAppendScalaVersion := true
-// This is necessary because of how we explicitly specify Spark dependencies
-// for tests rather than using the sbt-spark-package plugin to provide them.
-spIgnoreProvided := true
-
-val testSparkVersion = settingKey[String]("The version of Spark to test against.")
-
-testSparkVersion := sys.props.get("spark.testVersion").getOrElse(sparkVersion.value)
+val luceneV = "8.4.0"
+val sparkVersion = "2.4.4"
 
 
 // scalastyle:off
-val scalactic = "org.scalactic" %% "scalactic" % "3.0.7"
-val scalatest = "org.scalatest" %% "scalatest" % "3.0.7" % "test"
+val scalactic = "org.scalactic" %% "scalactic" % "3.1.0"
+val scalatest = "org.scalatest" %% "scalatest" % "3.1.0" % "test"
 
-val joda_time = "joda-time" % "joda-time" % "2.10.1"
-val algebird = "com.twitter" %% "algebird-core" % "0.13.5"
-val joda_convert = "org.joda" % "joda-convert" % "2.2.0"
+val joda_time = "joda-time" % "joda-time" % "2.10.5"
+val algebird = "com.twitter" %% "algebird-core" % "0.13.6"
+val joda_convert = "org.joda" % "joda-convert" % "2.2.1"
 val spatial4j = "org.locationtech.spatial4j" % "spatial4j" % "0.7"
 
-val typesafe_config = "com.typesafe" % "config" % "1.3.3"
+val typesafe_config = "com.typesafe" % "config" % "1.3.4"
 
 val lucene_facet = "org.apache.lucene" % "lucene-facet" % luceneV
 val lucene_analyzers = "org.apache.lucene" % "lucene-analyzers-common" % luceneV
@@ -135,9 +123,10 @@ libraryDependencies ++= Seq(
 )
 
 libraryDependencies ++= Seq(
-  "org.apache.spark" %% "spark-core" % testSparkVersion.value % "test" force(),
-  "org.apache.spark" %% "spark-sql" % testSparkVersion.value % "test" force(),
-  "com.holdenkarau" %% "spark-testing-base" % s"2.4.0_0.11.0" % "test" intransitive(),
+  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
+  "org.apache.spark" %% "spark-sql" % sparkVersion % "provided",
+  "org.apache.spark" %% "spark-mllib" % sparkVersion % "provided",
+  "com.holdenkarau" %% "spark-testing-base" % s"2.4.3_0.12.0" % "test" intransitive(),
   "org.scala-lang" % "scala-library" % scalaVersion.value % "compile"
 )
25 changes: 25 additions & 0 deletions deployToSonartype.md
@@ -0,0 +1,25 @@
+## Setup
+
+# Add `sonatype.sbt` file under `~/.sbt/1.0/` folder with contents
+
+```
+credentials += Credentials("Sonatype Nexus Repository Manager",
+        "oss.sonatype.org",
+        "zouzias",
+        "PASSWORD_HERE")
+```
+
+## Run sbt release to release signed both 2.10 and 2.11
+
+```
+sbt release
+```
+
+## Then, git checkout v0.X.X to the release tag first, and then type
+
+```
+sbt sonatypeRelease
+```
+
+## This will allow sonatype to release the artifacts to maven central.
+## An alternative is to browse to https://oss.sonatype.org and do it manually
13 changes: 0 additions & 13 deletions deployToSonartype.sh

This file was deleted.

2 changes: 1 addition & 1 deletion project/build.properties
@@ -1 +1 @@
-sbt.version=0.13.18
+sbt.version=1.3.6
16 changes: 7 additions & 9 deletions project/plugins.sbt
@@ -17,22 +17,20 @@
 
 resolvers += "bintray-spark-packages" at "https://dl.bintray.com/spark-packages/maven/"
 
-addSbtPlugin("com.eed3si9n" % "sbt-buildinfo" % "0.7.0")
+addSbtPlugin("com.eed3si9n" % "sbt-buildinfo" % "0.9.0")
 
-addSbtPlugin("com.timushev.sbt" % "sbt-updates" % "0.4.0")
+addSbtPlugin("com.timushev.sbt" % "sbt-updates" % "0.5.0")
 
-addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.9")
+addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")
 
-addSbtPlugin("com.github.gseitz" % "sbt-release" % "1.0.11")
+addSbtPlugin("com.github.gseitz" % "sbt-release" % "1.0.12")
 
 addSbtPlugin("org.scalastyle" %% "scalastyle-sbt-plugin" % "1.0.0")
 
 addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1")
 
-addSbtPlugin("org.scoverage" % "sbt-coveralls" % "1.2.6")
+addSbtPlugin("org.scoverage" % "sbt-coveralls" % "1.2.7")
 
-addSbtPlugin("org.xerial.sbt" % "sbt-sonatype" % "1.1")
+addSbtPlugin("org.xerial.sbt" % "sbt-sonatype" % "2.5")
 
-addSbtPlugin("com.jsuereth" % "sbt-pgp" % "1.1.1")
-
-addSbtPlugin("org.spark-packages" % "sbt-spark-package" % "0.2.6")
+addSbtPlugin("com.jsuereth" % "sbt-pgp" % "1.1.2")
2 changes: 1 addition & 1 deletion spark-shell.sh
@@ -6,7 +6,7 @@ CURRENT_DIR=`pwd`
 SPARK_LUCENERDD_VERSION=`cat version.sbt | awk '{print $5}' | xargs`
 
 # You should have downloaded this spark version under your ${HOME}
-SPARK_VERSION="2.4.0"
+SPARK_VERSION="2.4.4"
 
 echo "==============================================="
 echo "Loading LuceneRDD with version ${SPARK_LUCENERDD_VERSION}"
