ChangeFileModeByMask error (5): Access is denied - java

I accessed a MySQL database and fetched a table. Everything works fine up to that point, but when I try to save the records as text (or in any other format) I get the error:
ExitCodeException exitCode=1: ChangeFileModeByMask error (5): Access is denied.
Any help will be appreciated.
import java.sql.DriverManager

import org.apache.spark.rdd.JdbcRDD
import org.apache.spark.{SparkConf, SparkContext}

object jdbcConnect {
  def main(args: Array[String]) {
    val url = "jdbc:mysql://127.0.0.1:3306/mydb"
    val username = "root"
    val password = "token_password"
    Class.forName("com.mysql.jdbc.Driver").newInstance
    //DriverManager.registerDriver(new com.mysql.jdbc.Driver());
    val conf = new SparkConf().setAppName("JDBC RDD").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    val myRDD = new JdbcRDD(sc, () => DriverManager.getConnection(url, username, password),
      "select s_Id,issue_date from store_details limit ?, ?",
      0, 10, 1, r => r.getString("s_Id") + ", " + r.getString("issue_date"))
    myRDD.foreach(println)
    myRDD.saveAsTextFile("C:/jdbcrddexamplee")
  }
}
Error
17/07/18 11:10:19 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
ExitCodeException exitCode=1: ChangeFileModeByMask error (5): Access is denied.
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)

It turned out to be a permission error. My foolishness...
Make sure to run everything as an administrator. Though I would suggest using a DataFrame instead of an RDD :D
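For reference, here is a minimal sketch of the DataFrame approach (assuming Spark 2.x with spark-sql on the classpath; the URL, credentials, table, and column names are taken from the question, while the output path is made up):

import org.apache.spark.sql.SparkSession

object jdbcConnectDF {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("JDBC DataFrame")
      .master("local[2]")
      .getOrCreate()

    // The jdbc data source manages connections and partitioning for you,
    // so there is no need for JdbcRDD's "limit ?, ?" query trick
    val df = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://127.0.0.1:3306/mydb")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("dbtable", "store_details")
      .option("user", "root")
      .option("password", "token_password")
      .load()

    df.select("s_Id", "issue_date").show(10)

    // Writing still goes through Hadoop's local filesystem code, so the
    // target directory must be one the process is allowed to create
    df.select("s_Id", "issue_date").write.csv("C:/jdbcdfexample")
  }
}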

Related

java.nio.ByteBuffer wrap method partly working with sbt run

I have an issue where I read a byte stream from a big file (~100 MB), and after some integers I get the value 0 (but only with sbt run). When I hit the play button in IntelliJ I get the value I expected (> 0).
My guess was that the environment is somehow different, but I could not spot the difference.
// DemoApp.scala
import java.nio.{ByteBuffer, ByteOrder}

object DemoApp extends App {
  val inputStream = getClass.getResourceAsStream("/HandRanks.dat")
  val handRanks = new Array[Byte](inputStream.available)
  inputStream.read(handRanks)
  inputStream.close()

  def evalCard(value: Int) = {
    val offset = value * 4
    println("value: " + value)
    println("offset: " + offset)
    ByteBuffer.wrap(handRanks, offset, handRanks.length - offset).order(ByteOrder.LITTLE_ENDIAN).getInt
  }

  val cards: List[Int] = List(51, 45, 14, 2, 12, 28, 46)

  def eval(cards: List[Int]): Unit = {
    var p = 53
    cards.foreach(card => {
      println("p = " + evalCard(p))
      p = evalCard(p + card)
    })
    println("result p: " + p)
  }

  eval(cards)
}
The HandRanks.dat file can be found here (I put it inside a directory called resources):
https://github.com/Robert-Nickel/scala-texas-holdem/blob/master/src/main/resources/HandRanks.dat
build.sbt is:
name := "LoadInts"
version := "0.1"
scalaVersion := "2.13.4"
On my Windows machine I use sbt 1.4.6 with Oracle Java 11.
You will see that the evalCard call works four times, but after the fifth call the return value is 0. It should be higher than 0, which it is when using IntelliJ's play button.
You are not reading the whole content. This
val handRanks = new Array[Byte](inputStream.available)
allocates only as many bytes as the InputStream currently has buffered, and then you read just that amount with
inputStream.read(handRanks)
Depending on the defaults you will process a different amount, but it will never be the full 100 MB of data. For that you would have to read the data into some structure in a loop (a bad idea for a file this size) or process it in chunks (with iterators, streams, etc.).
import scala.annotation.tailrec
import scala.util.Using

// Using will close the resource whether an error happens or not
Using(getClass.getResourceAsStream("/HandRanks.dat")) { inputStream =>
  def readChunk(): Option[Array[Byte]] = {
    // can be done better, but that's not the point here
    val buffer = new Array[Byte](inputStream.available)
    val bytesRead = inputStream.read(buffer)
    if (bytesRead >= 0) Some(buffer.take(bytesRead))
    else None
  }

  @tailrec def process(): Unit = {
    readChunk() match {
      case Some(chunk) =>
        // do something with the chunk
        process()
      case None =>
        // nothing to do - EOF reached
    }
  }

  process()
}
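Since the question runs on Java 11, an even simpler option (a sketch, assuming the ~100 MB file really should be held in memory at once, as the original code intends) is to let the JDK drain the stream to EOF instead of sizing the array from available:

import scala.util.Using

// InputStream.readAllBytes (JDK 9+) keeps reading until EOF, so the
// resulting array holds the whole file - unlike a single read() call
// sized by available(), which only reflects the current buffer
val handRanks: Array[Byte] =
  Using.resource(getClass.getResourceAsStream("/HandRanks.dat")) { in =>
    in.readAllBytes()
  }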

Firebase Java Admin SDK 5.6.0 FirestoreClient constantly throws io.grpc.StatusRuntimeException: UNKNOWN

When using the Firebase Admin SDK 5.6.0 for Java (in a Scala Play! application), we constantly get io.grpc.StatusRuntimeException: UNKNOWN any time we get or set data using the FirestoreClient. The auth functions of Firebase seem to work without any issue, however.
Here is the exception we are getting:
ERROR application - method=GET uri=/v1/users/synchAllUsers remote-address=0:0:0:0:0:0:0:1 status=500 error=class java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNKNOWN
com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503)
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:482)
com.google.api.core.AbstractApiFuture.get(AbstractApiFuture.java:56)
services.FirebaseAdminService.createToken(FirebaseAdminService.scala:98)
services.UsersService.$anonfun$synchAllUsers$2(UsersService.scala:37)
scala.collection.Iterator.foreach(Iterator.scala:929)
scala.collection.Iterator.foreach$(Iterator.scala:929)
scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
scala.collection.IterableLike.foreach(IterableLike.scala:71)
scala.collection.IterableLike.foreach$(IterableLike.scala:70)
scala.collection.AbstractIterable.foreach(Iterable.scala:54)
services.UsersService.$anonfun$synchAllUsers$1(UsersService.scala:34)
services.UsersService.$anonfun$synchAllUsers$1$adapted(UsersService.scala:34)
scala.util.Success.$anonfun$map$1(Try.scala:251)
scala.util.Success.map(Try.scala:209)
scala.concurrent.Future.$anonfun$map$1(Future.scala:287)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:38)
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Here is the code we are running:
private lazy val app = {
  FirebaseApp.getApps().stream().filter(a => a.getName == FirebaseApp.DEFAULT_APP_NAME).findFirst().orElseGet(
    () => {
      //val serviceAccount = new ByteArrayInputStream(Firebase.serviceAccountKey.getBytes(StandardCharsets.UTF_8.name()))
      val serviceAccount = new FileInputStream("conf/gcp_service_account.json")
      val options = new FirebaseOptions.Builder()
        .setCredentials(GoogleCredentials.fromStream(serviceAccount))
        //.setDatabaseUrl("https://festive-bazaar-146119.firebaseio.com")
        .build()
      serviceAccount.close()
      FirebaseApp.initializeApp(options, FirebaseApp.DEFAULT_APP_NAME)
    }
  )
}

def createToken(user: User, claims: Map[String, Object] = Map()) = {
  val auth = FirebaseAuth.getInstance(app)
  val db = FirestoreClient.getFirestore(app)
  val firebaseUser = getUserById(auth, user.id.toString)
  if (firebaseUser == null) {
    auth.createUserAsync(new CreateRequest()
      .setUid(user.id.toString)
      .setDisplayName(user.firstName + ' ' + user.lastName)
      .setEmail(user.email)
      .setEmailVerified(true)
      .setDisabled(false)).get()
  }
  else {
    auth.updateUserAsync(new UpdateRequest(user.id.toString)
      .setDisplayName(user.firstName + ' ' + user.lastName)
      .setEmail(user.email)
      .setEmailVerified(true)
      .setDisabled(false)).get()
  }
  // Set the user's info in our user metadata area
  val userInfoRec = new ImmutableMap.Builder[String, String]()
    .put("id", user.id.toString)
    .put("name", user.email)
    .put("email", user.email)
    .put("firstName", user.firstName)
    .put("lastName", user.lastName)
    .build()
  db.collection("tenants").document(user.companyId.toString).collection("users").document(user.id.toString).get()
  val users = db.collection("tenants").get.get
  val result = db.collection("tenants").document(user.companyId.toString).collection("users").document(user.id.toString).set(userInfoRec)
  result.get() // **** This triggers the exception shown above, every time ****
  auth.createCustomTokenAsync(user.id.toString, (claims + ("companyId" -> user.companyId.asInstanceOf[Object])).asJava).get()
}
The **** comment above marks the line that triggers the exception. Has anyone else run into this problem, or does anyone know anything about it? In its current state it makes the FirestoreClient completely useless to us, as we can neither get nor set data in the Firestore. I've checked the documentation and the API reference, and generally Googled around, but I can't seem to find anything useful.
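One generic first step (a sketch, not a confirmed fix) is to unwrap the ExecutionException around the failing result.get() and log the underlying gRPC status, since UNKNOWN usually wraps a more specific cause:

import java.util.concurrent.ExecutionException
import io.grpc.StatusRuntimeException

// Wraps the failing call from the question; the real problem (for
// example, a dependency conflict on the gRPC/Netty stack) often only
// shows up on the wrapped Status and its cause.
try {
  result.get()
} catch {
  case e: ExecutionException =>
    e.getCause match {
      case s: StatusRuntimeException =>
        println(s"gRPC status: ${s.getStatus}; cause: ${s.getStatus.getCause}")
        throw e
      case _ =>
        throw e
    }
}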

Not able to connect to Cloud SQL using Java SocketFactory Library

I am trying to connect to Cloud SQL (MySQL) from my Java code. I am getting the error below:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create socket factory 'com.google.cloud.sql.mysql.SocketFactory' due to underlying exception:
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.google.cloud.sql.mysql.SocketFactory
Here is my code -
package utils

import java.sql.DriverManager
import java.sql.Connection
import scala.collection.mutable.ListBuffer
import entity.AnalyticFieldEntity
import compute.driver.AnalyticTools
import entity.ErrorHandlingEntity

object ScalaDbConnect {
  def getAnalyticBatchMap(toolId: Int, paramMap: Map[String, String]): Map[String, Int] = {
    val methodName = "getAnalyticBatchMap"
    val errorMode = paramMap.get("mode") + "(" + paramMap.get("analyticSource") + ")"
    val dbTuple = DbPropertiesReader.getDbProperties()
    val databaseName = dbTuple._3
    val instanceConnectionName = dbTuple._4
    val username = dbTuple._1
    val password = dbTuple._2
    var connection: Connection = null
    val analyticMap = collection.mutable.Map.empty[String, Int]
    try {
      //[START doc-example]
      val jdbcUrl = String.format(
        "jdbc:mysql://google/%s?cloudSqlInstance=%s&"
          + "socketFactory=com.google.cloud.sql.mysql.SocketFactory", databaseName, instanceConnectionName)
      println(jdbcUrl)
      //Class.forName("com.mysql.jdbc.GoogleDriver");
      // assign to the outer var (rather than shadowing it with a new val)
      // so connection.close() below works
      connection = DriverManager.getConnection(jdbcUrl, username, password)
      println(connection)
      //[END doc-example]
      try {
        val statement = connection.createStatement()
        val resultSet = statement.executeQuery("SELECT omnitureColumnHeader.columnHeaderId, case when analyticFieldMap.isTag = 1 then concat(\"tag_\",analyticFieldMap.entityField) else " +
          "analyticFieldMap.entityField end as entityField FROM omnitureColumnHeader INNER JOIN analyticFieldMap ON " +
          "analyticFieldMap.analyticFieldBatch=omnitureColumnHeader.columnHeaderValue where analyticFieldMap.toolId = " + toolId)
        System.out.println("statement: 2" + statement)
        System.out.println("resultSet: 2" + resultSet)
        while (resultSet.next()) {
          System.out.println("inside the content loop: 2")
          analyticMap += resultSet.getString("entityField") -> resultSet.getInt("columnHeaderId")
        }
        System.out.println("analyticMap: 2" + analyticMap)
      }
      catch {
        case _: Throwable => println("Got some other kind of exception")
      }
    } catch {
      case e: Exception =>
        val errorHandlingEntity = new ErrorHandlingEntity()
        errorHandlingEntity.Mode = errorMode
        errorHandlingEntity.Tool = paramMap.get("tool").toString()
        errorHandlingEntity.Message = "DB Connection Issue"
        errorHandlingEntity.Trace = e.printStackTrace().toString()
        errorHandlingEntity.Source = "Spark"
        errorHandlingEntity.YarnAppId = paramMap.get("appID").toString()
        errorHandlingEntity.MethodName = methodName
        errorHandlingEntity.ReThrow = true
        errorHandlingEntity.CurrentException = e
        ErrorHandlingFramework.HandleException(errorHandlingEntity)
    }
    connection.close()
    analyticMap.toMap
  }
}
I have added the dependency below to my pom.xml:
<dependency>
  <groupId>com.google.cloud.sql</groupId>
  <artifactId>mysql-socket-factory</artifactId>
  <version>1.0.3</version>
</dependency>
Here is the complete POM.XML - https://pastebin.com/jvxSBZMX
I am trying to connect to Google Cloud SQL from my Scala code, using the Java APIs.
The error indicates that the class needed for the connection cannot be found on the classpath.
Any help would be appreciated.
Looking forward to a solution.
Thanks,
The issue is with how Google Cloud runs the Maven build.
It is not able to read the classes from the build, so I passed those JAR files with the --jars option.
This solved my issue.
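For illustration, a hypothetical spark-submit invocation along those lines (the class name, JAR names, and paths are placeholders, not taken from the question):

spark-submit \
  --class your.main.Class \
  --jars /path/to/mysql-socket-factory-1.0.3.jar,/path/to/mysql-connector-java.jar \
  your-application.jar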

JdbcRDD error: connection established, data fetched partially

I tried to connect to a MySQL database to fetch table records. I can establish the connection, and 10 records are fetched as well, but then the code suddenly crashes, and I don't know why. PS: I am new to Scala... Any help would be appreciated.
import java.sql.DriverManager

import org.apache.spark.rdd.JdbcRDD
import org.apache.spark.{SparkConf, SparkContext}

object jdbcConnect {
  def main(args: Array[String]) {
    val url = "jdbc:mysql://127.0.0.1:3306/mydb"
    val username = "root"
    val password = "token_password"
    Class.forName("com.mysql.jdbc.Driver").newInstance
    //DriverManager.registerDriver(new com.mysql.jdbc.Driver());
    val conf = new SparkConf().setAppName("JDBC RDD").setMaster("local[2]").set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    val myRDD = new JdbcRDD(sc, () => DriverManager.getConnection(url, username, password),
      "select s_Id,issue_date from store_details limit ?, ?",
      0, 10, 1, r => r.getString("s_Id") + ", " + r.getString("issue_date"))
    myRDD.foreach(println)
    myRDD.saveAsTextFile("C:/jdbcrddexamplee")
  }
}
ERROR
17/07/16 02:32:24 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
ExitCodeException exitCode=1: ChangeFileModeByMask error (5): Access is denied.
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:804)
It turned out to be a permission error.
My foolishness...
Make sure to run everything as an administrator.
Though I would suggest using a DataFrame instead of an RDD :D
Thanks
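As an aside, a hedged alternative (an assumption based on the Hadoop frames in the trace, not something verified in this answer): on Windows this error is often tied to a missing winutils.exe, which Hadoop's RawLocalFileSystem.setPermission shells out to. A minimal sketch, assuming winutils.exe has been placed in C:\hadoop\bin:

// Set before the SparkContext is created so Hadoop can find
// bin\winutils.exe under this directory (assumed location)
System.setProperty("hadoop.home.dir", "C:\\hadoop")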

Log producer object in Scala

I am new to Scala and Java altogether and am trying to run a sample producer. All it does is take some raw products and referrers stored in CSV files and use a random generator to produce random log lines. Following is my code:
import java.io.FileWriter
import scala.util.Random

object LogProducer extends App {
  //WebLog config
  val wlc = Settings.WebLogGen

  val Products = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray
  val Referrers = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/referrers.csv")).getLines().toArray
  val Visitors = (0 to wlc.visitors).map("Visitors-" + _)
  val Pages = (0 to wlc.pages).map("Pages-" + _)

  val rnd = new Random()
  val filePath = wlc.filePath
  val fw = new FileWriter(filePath, true)

  //adding randomness to time increments for demo
  val incrementTimeEvery = rnd.nextInt(wlc.records - 1) + 1

  var timestamp = System.currentTimeMillis()
  var adjustedTimestamp = timestamp

  for (iteration <- 1 to wlc.records) {
    adjustedTimestamp = adjustedTimestamp + ((System.currentTimeMillis() - timestamp) * wlc.timeMultiplier)
    timestamp = System.currentTimeMillis()
    val action = iteration % (rnd.nextInt(200) + 1) match {
      case 0 => "purchase"
      case 1 => "add_to_cart"
      case _ => "page_view"
    }
    val referrer = Referrers(rnd.nextInt(Referrers.length - 1))
    val prevPage = referrer match {
      case "Internal" => Pages(rnd.nextInt(Pages.length - 1))
      case _ => ""
    }
    val visitor = Visitors(rnd.nextInt(Visitors.length - 1))
    val page = Pages(rnd.nextInt(Pages.length - 1))
    val product = Products(rnd.nextInt(Products.length - 1))
    val line = s"$adjustedTimestamp\t$referrer\t$action\t$prevPage\t$visitor\t$page\t$product\n"
    fw.write(line)
    if (iteration % incrementTimeEvery == 0) {
      //os.flush()
      println(s"Sent $iteration messages!")
      val sleeping = rnd.nextInt(incrementTimeEvery * 60)
      println(s"Sleeping for $sleeping ms")
    }
  }

  // flush and close the writer so all lines reach the file
  fw.close()
}
It is pretty straightforward: it basically generates some values and appends them to a line.
However, I am getting a big exception stack trace which I am not able to understand:
"C:\Program Files\Java\jdk1.8.0_92\bin\java...
Exception in thread "main" java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(CoderResult.java:281)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:339)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:70)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:59)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:50)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:310)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:308)
at scala.collection.AbstractIterator.to(Iterator.scala:1417)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:302)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1417)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:289)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:283)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1417)
at clickstream.LogProducer$.delayedEndpoint$clickstream$LogProducer$1(logProducer.scala:16)
at clickstream.LogProducer$delayedInit$body.apply(logProducer.scala:12)
at scala.Function0.apply$mcV$sp(Function0.scala:34)
at scala.Function0.apply$mcV$sp$(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App.$anonfun$main$1$adapted(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:389)
at scala.App.main(App.scala:76)
at scala.App.main$(App.scala:74)
at clickstream.LogProducer$.main(logProducer.scala:12)
at clickstream.LogProducer.main(logProducer.scala)
Process finished with exit code 1
Can someone please help me identify what the exception means? Thanks, all.
So it wasn't hard... it was my amateurish knowledge. It was a simple I/O problem where IntelliJ wasn't able to read the values from my CSV file. When I imported it into the resources root directory, it gave me a warning message about wrong encoding.
The error was at this point:
val Products = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray
Thanks for the efforts though.
It was an encoding issue. For Scala, a quick fix would be to replace:
val Products = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray
val Referrers = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/referrers.csv")).getLines().toArray
with this:
val Products = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv"))("UTF-8").getLines().toArray
val Referrers = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/referrers.csv"))("UTF-8").getLines().toArray
For Java and more details, please check out this link: http://biercoff.com/malformedinputexception-input-length-1-exception-solution-for-scala-and-java/
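If the file may contain bytes that are not valid in the chosen charset, a more defensive sketch (using scala.io.Codec, which is what the "UTF-8" argument above is converted to) tells the decoder to substitute malformed input instead of throwing:

import java.nio.charset.CodingErrorAction
import scala.io.{Codec, Source}

// Replace undecodable bytes with the Unicode replacement character
// instead of failing mid-read with MalformedInputException
implicit val codec: Codec = Codec("UTF-8")
  .onMalformedInput(CodingErrorAction.REPLACE)
  .onUnmappableCharacter(CodingErrorAction.REPLACE)

val Products = Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray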
