I am trying to compile some code to test out, and I am receiving some errors:
package org.apache.poi.hwpf.usermodel does not exist
package org.apache.poi.hwpf.extractor does not exist
package org.apache.poi.hwpf does not exist
Does anyone know where I can find these packages?
It's to compile a simple piece of code that should just convert a .docx file to a PDF file.
The Word 2007 XML formats (.docx) live under xwpf; hwpf is for the older binary Word formats. For example, the usermodel package is org.apache.poi.xwpf.usermodel.
The jar for this is poi-ooxml, and there is currently a copy on the Maven repo1 at: http://repo1.maven.org/maven2/org/apache/poi/poi-ooxml/3.9/
I'm using SBT with these dependencies:
// Add multiple dependencies
libraryDependencies ++= Seq(
"org.apache.poi" % "poi" % "3.9" % "compile->default",
"org.apache.poi" % "poi-ooxml" % "3.9" % "compile->default",
"org.apache.poi" % "poi-ooxml-schemas" % "3.9" % "compile->default",
"org.mortbay.jetty" % "jetty" % "6.1.22" % "test->default",
"junit" % "junit" % "4.5" % "test->default",
"org.scalatest" %% "scalatest" % "1.6.1" % "test->default"
)
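If it helps, here is a minimal sketch that assumes only the poi-ooxml dependency above and some .docx file on disk (sample.docx is just a placeholder name). It opens the file through the XWPF usermodel and dumps its text, which is enough to confirm that the packages resolve:
import java.io.FileInputStream
import org.apache.poi.xwpf.usermodel.XWPFDocument
import org.apache.poi.xwpf.extractor.XWPFWordExtractor

object XwpfCheck {
  def main(args: Array[String]): Unit = {
    // "sample.docx" is a placeholder path, not a file from the question
    val in = new FileInputStream("sample.docx")
    val doc = new XWPFDocument(in)             // XWPF = the OOXML (.docx) object model
    val extractor = new XWPFWordExtractor(doc) // from org.apache.poi.xwpf.extractor
    println(extractor.getText)                 // prints the document's text content
    in.close()
  }
}
Note that POI itself only reads and writes the Word formats; actually producing a PDF will need an additional library on top of this.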
I am trying to run the following code. It looks mostly like a dependency issue to me.
Dataset<Row> ds = spark.read().parquet("hdfs://localhost:9000/test/arxiv.parquet");
I am getting the following error:
Exception in thread "main" java.io.IOException: com.google.protobuf.ServiceException: java.lang.NoSuchFieldError: PARSER
at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:71)
I have added the Apache hadoop-common dependency.
Can someone point out the possible problem with the code?
First post, so I don't know if I'm doing this right, but I had the same problem and added dependencies until it went away. Here is the list; I don't really know which ones are actually needed:
libraryDependencies += "org.apache.parquet" % "parquet-hadoop" % "1.11.0"
libraryDependencies += "org.apache.parquet" % "parquet-avro" % "1.11.0"
libraryDependencies += "org.apache.parquet" % "parquet-encoding" % "1.11.0"
libraryDependencies += "org.apache.parquet" % "parquet-column" % "1.11.0"
libraryDependencies += "org.apache.parquet" % "parquet-common" % "1.11.0"
libraryDependencies += "org.apache.parquet" %% "parquet-scala" % "1.11.0"
libraryDependencies += "org.apache.parquet" % "parquet-hadoop-bundle" % "1.11.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "3.3.1" % Test
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "3.3.1"
Following up on the answer given by MinoS03, the required dependencies are just the following (a build.sbt sketch is shown after the list):
hadoop-common
hadoop-hdfs-client
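In build.sbt form that would be something like the lines below (the 3.3.1 version is an assumption; match it to the Hadoop version of your cluster):
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "3.3.1"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs-client" % "3.3.1"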
The issue is with the error below:
[error] at scala.tools.nsc.typechecker.Typers$Typer.typedApply$1(Typers.scala:4580)
[error] at scala.tools.nsc.typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5343)
[error] at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5360)
[error] at scala.tools.nsc.typechecker.Typers$Typer.runTyper$1(Typers.scala:5396)
[error] (Compile / compileIncremental) java.lang.StackOverflowError
[error] Total time: 11 s, completed Apr 25, 2019 7:11:28 PM
I also tried to increase the JVM parameters with
javaOptions ++= Seq("-Xms512M", "-Xmx4048M", "-XX:MaxPermSize=4048M", "-XX:+CMSClassUnloadingEnabled")
but it didn't help. All the dependencies seem to resolve properly, but the build keeps failing with this error.
build.properties
sbt.version=1.2.8
plugin.sbt
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1")
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.9")
And the build.sbt
name := "ProjectNew"
version := "4.0"
scalaVersion := "2.11.8"
fork := true
libraryDependencies ++= Seq(
"org.scalaz" %% "scalaz-core" % "7.1.0" % "test",
("org.apache.spark" %% "spark-core" % "2.1.0.cloudera1").
exclude("org.mortbay.jetty", "servlet-api").
exclude("commons-beanutils", "commons-beanutils-core").
//exclude("commons-collections", "commons-collections").
exclude("com.esotericsoftware.minlog", "minlog").
//exclude("org.apache.hadooop","hadoop-client").
exclude("commons-logging", "commons-logging") % "provided",
("org.apache.spark" %% "spark-sql" % "2.1.0.cloudera1")
.exclude("com.esotericsoftware.minlog","minlog")
//.exclude("org.apache.hadoop","hadoop-client")
% "provided",
("org.apache.spark" %% "spark-hive" % "2.1.0.cloudera1")
.exclude("com.esotericsoftware.minlog","minlog")
//.exclude("org.apache.hadoop","hadoop-client")
% "provided",
"spark.jobserver" % "job-server-api" % "0.4.0",
"org.scalatest" %%"scalatest" % "2.2.4" % "test",
"com.github.nscala-time" %% "nscala-time" % "1.6.0"
)
//libraryDependencies ++= Seq(
// "org.apache.spark" %% "spark-core" % "1.5.0-cdh5.5.0" % "provided",
// "org.apache.spark" %% "spark-sql" % "1.5.0-cdh5.5.0" % "provided",
// "org.scalatest"%"scalatest_2.10" % "2.2.4" % "test",
// "com.github.nscala-time" %% "nscala-time" % "1.6.0"
// )
resolvers ++= Seq(
"cloudera" at "http://repository.cloudera.com/artifactory/cloudera-repos/",
"Job Server Bintray" at "http://dl.bintray.com/spark-jobserver/maven"
)
scalacOptions ++= Seq("-unchecked", "-deprecation")
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs @ _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
parallelExecution in Test := false
fork in Test := true
javaOptions ++= Seq("-Xms512M", "-Xmx4048M", "-XX:MaxPermSize=4048M", "-XX:+CMSClassUnloadingEnabled")
It was a memory issue: the Scala compiler runs inside sbt's own JVM, so javaOptions (which only applies to forked run/test processes) does not help here; the flags have to be given to sbt itself.
I referred to an existing answer on this, and in my C:\Program Files (x86)\sbt\conf\sbtconfig file I added/increased the following parameters for memory:
-Xmx2G
-XX:MaxPermSize=1000m
-XX:ReservedCodeCacheSize=1000m
-Xss8M
After that, running sbt package worked seamlessly and compilation succeeded.
Thank you all.
I am writing a small Scala program which converts CSV to Parquet.
I am using Databricks spark-csv.
Here is my build.sbt:
name := "tst"
version := "1.0"
scalaVersion := "2.10.5"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.6.1" % "provided",
"org.apache.spark" %% "spark-sql" % "1.6.1",
"com.databricks" % "spark-csv_2.10" % "1.5.0",
"org.apache.spark" %% "spark-hive" % "1.6.1",
"org.apache.commons" % "commons-csv" % "1.1",
"com.univocity" % "univocity-parsers" % "1.5.1",
"org.slf4j" % "slf4j-api" % "1.7.5" % "provided",
"org.scalatest" %% "scalatest" % "2.2.1" % "test",
"com.novocode" % "junit-interface" % "0.9" % "test",
"com.typesafe.akka" % "akka-actor_2.10" % "2.3.11",
"org.scalatest" %% "scalatest" % "2.2.1",
"com.holdenkarau" %% "spark-testing-base" % "1.6.1_0.3.3",
"com.databricks" % "spark-csv_2.10" % "1.5.0",
"org.joda" % "joda-convert" % "1.8.1"
)
After sbt package, when I run the command
spark-submit --master local[*] target/scala-2.10/tst_2.10-1.0.jar
I get the following error:
Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
I can see the com.databricks_spark-csv_2.10-1.5.0.jar file in ~/.ivy2/jars/, downloaded by the sbt package command.
Here is the source code of dataconversion.scala:
import org.apache.spark.sql.SQLContext
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object dataconversion {
def main(args: Array[String]) {
val conf =
new SparkConf()
.setAppName("ClusterScore")
.set("spark.storage.memoryFraction", "1")
val sc = new SparkContext(conf)
val sqlc = new SQLContext(sc)
val df = sqlc.read
.format("com.databricks.spark.csv")
.option("header", "true") // Use first line of all files as header
.option("inferSchema", "true") // Automatically infer data types
.load("/tmp/cars.csv")
df.printSchema() // printSchema already prints the schema; no need to wrap it in println
}
}
I can run spark-submit without error if I specify the --jars option with an explicit jar path, but that's not ideal. Please suggest.
Use the sbt-assembly plugin to build a "fat jar" containing all your dependencies with sbt assembly, and then call spark-submit on that.
In general, when you get a ClassNotFoundException, list the contents of the jar you created with jar tvf target/scala-2.10/tst_2.10-1.0.jar to see what is actually in it. Checking what's in the Ivy cache is meaningless; that just tells you that SBT found the dependency. As mathematicians say, that's necessary but not sufficient.
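If sbt-assembly isn't set up yet, a minimal sketch is just the plugin line below (the plugin version is an assumption, and the assembled jar name depends on your settings):
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.9")
Then run sbt assembly and point spark-submit at the fat jar it produces, e.g. target/scala-2.10/tst-assembly-1.0.jar instead of the thin tst_2.10-1.0.jar.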
The mentioned library is required at runtime, so you have these options (an example --packages invocation is shown after the list):
1. Place com.databricks_spark-csv_2.10-1.5.0.jar on a local or HDFS-reachable path and provide it as a dependency with the --jars parameter
2. Use --packages com.databricks:spark-csv_2.10:1.5.0, which will provide the required lib to your process
3. Build a fat jar with your dependencies and forget about --jars
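For example, option 2 would look roughly like this (same jar as above):
spark-submit --packages com.databricks:spark-csv_2.10:1.5.0 --master local[*] target/scala-2.10/tst_2.10-1.0.jar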
I am trying to add Firebase to my Play Framework project. I followed this link:
https://medium.com/@RICEaaron/scala-firebase-da433df93bd2#.m1fwlvc8l
I have completed the following steps:
created a project in the Firebase developer console
generated a private server key and downloaded the JSON file
added the Firebase Server SDK dependency in build.sbt
This is my build.sbt code:
name := """NeutrinoRPM"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayJava)
scalaVersion := "2.11.1"
resolvers += Resolver.sonatypeRepo("snapshots")
libraryDependencies ++= Seq(
javaJdbc,
cache,
javaWs,
javaCore,
"ws.securesocial" %% "securesocial" % "3.0-M3",
"org.julienrf" %% "play-jsmessages" % "1.6.2",
javaJpa.exclude("org.hibernate.javax.persistence", "hibernate-jpa-2.0-api"),
"org.hibernate" % "hibernate-entitymanager" % "4.3.4.Final",
"mysql" % "mysql-connector-java" % "5.1.9",
"com.typesafe.play" %% "play-mailer" % "2.4.0",
"com.nimbusds" % "nimbus-jose-jwt" % "3.8.2",
"com.wordnik" %% "swagger-play2" % "1.3.12",
"org.webjars" % "swagger-ui" % "2.1.8-M1",
"com.google.api-client" % "google-api-client" % "1.21.0",
"com.google.apis" % "google-api-services-analytics" % "v3-rev127-1.21.0",
"com.google.code.gson" % "gson" % "2.6.2",
"com.google.http-client" % "google-http-client-gson" % "1.21.0",
"org.apache.pdfbox" % "pdfbox" % "2.0.1",
"com.google.firebase" % "firebase-server-sdk" % "3.0.1"
)
Now I am trying to initialize the Firebase server SDK with this code snippet:
FileInputStream serviceAccount = new FileInputStream("path/to/serviceAccountKey.json");
FirebaseOptions options = new FirebaseOptions.Builder()
.setCredential(FirebaseCredentials.fromCertificate(serviceAccount))
.setDatabaseUrl("https://<DATABASE_NAME>.firebaseio.com/")
.build();
FirebaseApp.initializeApp(options);
But when I try to import
com.google.firebase.FirebaseApplication
com.google.firebase.FirebaseOptions
com.google.firebase.database
I get this error: The import com.google.firebase.FirebaseApplication cannot be resolved
I have spent many hours searching Google for a solution but found no help. Please help me.
Your dependency on the Firebase server SDK is old:
"com.google.firebase" % "firebase-server-sdk" % "3.0.1"
For new Firebase projects created through firebase.google.com, you should be using the Firebase Admin SDK when running in the JVM. The maven dependency is com.google.firebase:firebase-admin:4.1.0.
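In build.sbt that translates to replacing the firebase-server-sdk line above with:
"com.google.firebase" % "firebase-admin" % "4.1.0"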
There is no FirebaseApplication in that SDK - perhaps you are instead looking for FirebaseApp?
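With the Admin SDK the classes to import would be roughly the following (FirebaseDatabase being the database entry point; adjust to whichever pieces you actually use):
com.google.firebase.FirebaseApp
com.google.firebase.FirebaseOptions
com.google.firebase.database.FirebaseDatabase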
I recently upgraded to Play 2.4 and I'm still learning all the little quirks. I'm trying to just get my index page working and I'm having a hard time of it, and I know it's something small I'm missing. Here is the error:
CreationException: Unable to create injector, see the following errors:
1) Error in custom provider, Configuration error: Configuration error[Cannot connect to database [default]]
while locating play.api.db.DBApiProvider
while locating play.api.db.DBApi
for parameter 0 at play.db.DefaultDBApi.<init>(DefaultDBApi.java:28)
at play.db.DefaultDBApi.class(DefaultDBApi.java:28)
while locating play.db.DefaultDBApi
while locating play.db.DBApi
for field at play.db.DBModule$NamedDatabaseProvider.dbApi(DBModule.java:61)
while locating play.db.DBModule$NamedDatabaseProvider
at com.google.inject.util.Providers$GuicifiedProviderWithDependencies.initialize(Providers.java:149)
at play.db.DBModule.bindings(DBModule.java:40):
And my configuration information (yes, that port number for MySQL is correct):
application.conf
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql:localhost:33060/coffee_bean"
db.default.username=
db.default.password=""
ebean.default = ["models.*"]
Main controller
public Result index() {
ObjectNode response = Json.newObject();
Configuration config = Play.application().configuration();
response.put(config.getString("coffee.bean.message.key"),config.getString("coffee.bean.success.message"));
response.put(config.getString("version"), config.getString("coffee.bean.version"));
return ok(response);
}
build.sbt
name := """coffee-bean"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayJava)
scalaVersion := "2.11.6"
libraryDependencies ++= Seq(
javaJdbc,
cache,
javaWs,
"mysql" % "mysql-connector-java" % "5.1.36",
evolutions
)
// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator
lazy val myProject = (project in file("."))
.enablePlugins(PlayJava, PlayEbean)
plugins.sbt
// The Play plugin
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.2")
// Web plugins
addSbtPlugin("com.typesafe.sbt" % "sbt-coffeescript" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6")
addSbtPlugin("com.typesafe.sbt" % "sbt-jshint" % "1.0.3")
addSbtPlugin("com.typesafe.sbt" % "sbt-rjs" % "1.0.7")
addSbtPlugin("com.typesafe.sbt" % "sbt-digest" % "1.1.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-mocha" % "1.1.0")
// Play enhancer - this automatically generates getters/setters for public fields
// and rewrites accessors of these fields to use the getters/setters. Remove this
// plugin if you prefer not to have this feature, or disable on a per project
// basis using disablePlugins(PlayEnhancer) in your build.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-play-enhancer" % "1.1.0")
// Play Ebean support, to enable, uncomment this line, and enable in your build.sbt using
// enablePlugins(SbtEbean). Note, uncommenting this line will automatically bring in
// Play enhancer, regardless of whether the line above is commented out or not.
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "1.0.0")
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.0")
Anything else you need to see?
So far I've tried reimporting dependencies, double-checking that the versions are right, and going through the documentation again, and nothing has worked. I'm expecting a certain user will have all the answers :)
The database connection configuration is incomplete; in particular, the JDBC URL is missing the // (it should have the form jdbc:mysql://host:port/db) and the username is blank.
Define it in conf/application.conf along these lines:
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost/playdb"
db.default.username=playdbuser
db.default.password="a strong password"
My config:
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
version="2.1">
<persistence-unit name="defaultPersistenceUnit" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
<non-jta-data-source>caribbeanDS</non-jta-data-source>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.hbm2ddl.auto" value="update"/>
</properties>
</persistence-unit>
</persistence>
My build.sbt:
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayJava)
scalaVersion := "2.11.6"
libraryDependencies ++= Seq(
javaJdbc,
cache,
javaWs,
javaJpa,
"org.hibernate" % "hibernate-entitymanager" % "4.3.10.Final" // replace by your jpa implementation
)
// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator
fork in run := true
And my application.conf:
# Default database configuration using MySQL database engine
# Connect to playdb as playdbuser
db.caribbean.driver=com.mysql.jdbc.Driver
db.caribbean.url="jdbc:mysql://localhost:8889/bahamasnet"
db.caribbean.username=root
db.caribbean.password="root"
db.caribbean.jndiName=caribbeanDS
jpa.default=defaultPersistenceUnit