I have an issue where I read a byte stream from a big file (~100 MB), and after a few integers I get the value 0, but only with sbt run. When I hit the play button in IntelliJ I get the expected value > 0.
My guess was that the environment is somehow different, but I could not spot the difference.
// DemoApp.scala
import java.nio.{ByteBuffer, ByteOrder}
object DemoApp extends App {
val inputStream = getClass.getResourceAsStream("/HandRanks.dat")
val handRanks = new Array[Byte](inputStream.available)
inputStream.read(handRanks)
inputStream.close()
def evalCard(value: Int) = {
val offset = value * 4
println("value: " + value)
println("offset: " + offset)
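// each entry in handRanks is a 4-byte little-endian Int, so entry number 'value' starts at byte offset value * 4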
ByteBuffer.wrap(handRanks, offset, handRanks.length - offset).order(ByteOrder.LITTLE_ENDIAN).getInt
}
val cards: List[Int] = List(51, 45, 14, 2, 12, 28, 46)
def eval(cards: List[Int]): Unit = {
var p = 53
cards.foreach(card => {
println("p = " + evalCard(p))
p = evalCard(p + card)
})
println("result p: " + p);
}
eval(cards)
}
The HandRanks.dat file can be found here (I put it inside a directory called resources):
https://github.com/Robert-Nickel/scala-texas-holdem/blob/master/src/main/resources/HandRanks.dat
build.sbt is:
name := "LoadInts"
version := "0.1"
scalaVersion := "2.13.4"
On my Windows machine I use sbt 1.4.6 with Oracle Java 11.
You will see that the evalCard call works 4 times, but on the fifth call the return value is 0. It should be higher than 0, which it is when using IntelliJ's play button.
You are not reading the whole content. This
val handRanks = new Array[Byte](inputStream.available)
allocates only as many bytes as the InputStream currently reports as available, and then
inputStream.read(handRanks)
reads at most that amount in a single call. Depending on the defaults you will process a different amount of data, but it will never be the full 100 MB. For that you would have to either read the data into some structure in a loop (a bad idea) or process it in chunks (with iterators, streams, etc.).
import scala.annotation.tailrec
import scala.util.Using
// Using will close the resource whether an error happens or not
Using(getClass.getResourceAsStream("/HandRanks.dat")) { inputStream =>
def readChunk(): Option[Array[Byte]] = {
// can be done better, but that's not the point here
val buffer = new Array[Byte](inputStream.available)
val bytesRead = inputStream.read(buffer)
if (bytesRead >= 0) Some(buffer.take(bytesRead))
else None
}
@tailrec def process(): Unit = {
readChunk() match {
case Some(chunk) =>
// do something
process()
case None =>
// nothing to do - EOF reached
}
}
process()
}
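If, as in the question, the whole table really is needed in memory for random access, one alternative is to keep reading until EOF instead of trusting available. A minimal sketch, assuming Java 9+ (the question mentions Java 11), where InputStream.readAllBytes does that loop for you:
import scala.util.Using

// readAllBytes keeps reading until EOF, so the array holds the complete
// resource regardless of what available() reported for the first call.
val handRanks: Array[Byte] =
  Using.resource(getClass.getResourceAsStream("/HandRanks.dat")) { in =>
    in.readAllBytes()
  }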
Related
I wrote a small app that pings a remote server by sending ICMP packets using sockets (Android's Os package). I tested it on the API 28 emulator and it worked, but it does not work on API 22 (Android 5.1 Lollipop). I tested it on the virtual device as well as on my smartphone (same old API) with the same negative result:
android.system.ErrnoException: sendto failed: EINVAL (Invalid argument)
I debugged the app on different API versions, and the only difference I noticed is that ByteBuffer.wrap() produces a ByteArrayBuffer instance on API 22 and a HeapByteBuffer instance on newer versions. The payload itself seems to be the same.
Here is the code sample (simplified). Also, there is a test application available: https://github.com/alexeysirenko/android-sockets-icmp-ping-test
You could try launching it on different emulators (API 22 and >22) and see the difference:
private val timeoutMs = 5000
private val delayMs = 500L
private val ECHO_PORT = 80
private val POLLIN = (if (OsConstants.POLLIN == 0) 1 else OsConstants.POLLIN).toShort()
fun ping(host: String): Unit {
val inetAddress: InetAddress = InetAddress.getByName(host)
if (inetAddress is Inet6Address) throw Exception("IPv6 implementation omitted for simplicity")
val proto = OsConstants.IPPROTO_ICMP
val inet = OsConstants.AF_INET
val type = PacketBuilder.TYPE_ICMP_V4
val socketFileDescriptor = Os.socket(inet, OsConstants.SOCK_DGRAM, proto)
if (!socketFileDescriptor.valid()) throw Exception("Socket descriptor is invalid")
var sequenceNumber: Short = 0
for (i in 0..2) {
sequenceNumber++
val echoPacketBuilder =
PacketBuilder(type, "foobarbazquok".toByteArray())
.withSequenceNumber(sequenceNumber)
val buffer = echoPacketBuilder.build()
/**
* This is the command that throws an exception
*/
val bytesSent = Os.sendto(socketFileDescriptor, buffer, 0, buffer.size, 0, inetAddress, ECHO_PORT)
// Response processing code omitted
}
}
class PacketBuilder(val type: Byte, val payload: ByteArray, val sequenceNumber: Short = 0, val identifier: Short = 0xDBB) {
private val MAX_PAYLOAD = 65507
private val CODE: Byte = 0
init {
if (payload.size > MAX_PAYLOAD) throw Exception("Payload limited to $MAX_PAYLOAD")
}
fun build(): ByteArray {
val buffer = ByteArray(8 + payload.size)
val byteBuffer = ByteBuffer.wrap(buffer)
byteBuffer.put(type)
byteBuffer.put(CODE)
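// reserve 2 bytes here for the checksum; it is written back at checkPos once the payload is in place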
val checkPos = byteBuffer.position()
byteBuffer.position(checkPos + 2)
byteBuffer.putShort(identifier)
byteBuffer.putShort(sequenceNumber)
byteBuffer.put(payload)
byteBuffer.putShort(checkPos, checksum(buffer))
byteBuffer.flip()
return buffer
}
fun withSequenceNumber(sequenceNumber: Short): PacketBuilder {
return PacketBuilder(type, payload, sequenceNumber, identifier)
}
/**
* RFC 1071 checksum
*/
private fun checksum(data: ByteArray): Short {
var sum = 0
// High bytes (even indices)
for (i in 0 until data.size step 2) {
sum += data[i].and(0xFF.toByte()).toInt() shl 8
sum = (sum and 0xFFFF) + (sum shr 16)
}
// Low bytes (odd indices)
for (i in 1 until data.size step 2) {
sum += data[i] and 0xFF.toByte()
sum = (sum and 0xFFFF) + (sum shr 16)
}
sum = (sum and 0xFFFF) + (sum shr 16)
return (sum xor 0xFFFF).toShort()
}
companion object {
val TYPE_ICMP_V4: Byte = 8
}
}
If my code were incorrect I'd expect the same error on all platforms, but as I said it works on all API levels newer than 22, and I do not know exactly what causes this issue.
We have had a problem with JavaFX MediaPlayer for more than a month now. We read and tried all the similar questions here; nothing worked.
Simplest example: it's an info app with 2 videos from a folder played continually in a loop. After some time (sometimes 2 hours, sometimes 2 days) the videos start to slow down, playback takes longer, there are occasional freezes, and the sound becomes distorted.
OS: Windows 7 64-bit (Ultimate) SP1
Intel Celeron CPU N3150 @ 1.6 GHz
8 GB RAM
Java 1.8 and 1.10, 64-bit
Tried:
OS update, latest.
Driver updates, other drivers.
JDK 64-bit, 1.8 latest updates, JDK 1.10 latest.
Changing code (adding dispose, closing, setting null, caching... every suggestion on Stack Overflow; see the sketch after the code below)
Changing video encoders: MPEG-4, H.264 and H.264-HD, 1500/2000/4000/8000 kbps, 25fps
Changing resolutions: 1280x720,
Sound: AAC, 1 and 2 channels, 48 & 44 kHz, 128kbps bit rate and others. Tried videos without audio.
Almost every possible combination of these video/sound encoders, bitrates, resolutions, framerates ...
We profiled for memory leaks: memory grows for some time and then stops growing. It seems OK (we think).
Nothing we tried has worked. No errors, no exceptions on MediaView, Media, or Player. No system events.
Confusing: we noticed that in normal operation CPU usage is 20-30%, but when the videos start to freeze it drops to 5-15%.
Code (latest version):
import java.io.File
import akka.actor.{Actor, ActorLogging, Props, Timers}
import akka.pattern._
import com.commercial.activity.MMDriver.ShowNextMedia
import javafx.event.EventHandler
import javafx.scene.media.{Media, MediaErrorEvent, MediaPlayer, MediaView}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
object MMDriver {
def props(rootFolder: File, allowedExtension: List[String],
showMedia: (MMedia) => Unit): Props = Props(new MMDriver(rootFolder, allowedExtension, showMedia))
case object Start
case class MediaCreated(list: List[MMedia])
case object ShowNextMedia
}
class MMDriver(rootFolder: File, allowedExtensions: List[String],
showMedia: (MMedia) => Unit) extends Actor with ActorLogging with Timers {
object TimerKey
var started = false
var medias = List[MMedia]()
var index = 0
override def receive: Receive = {
case MMDriver.Start if (!started) =>
started = true
Future {
loadMedia(rootFolder, allowedExtensions)
.map { file =>
val media = new Media(file.toURI.toURL.toString)
val player = new MediaPlayer(media)
//next video
player.setOnEndOfMedia(new Runnable {
override def run(): Unit = {
self ! ShowNextMedia
}
})
val mediaView = new MediaView(player)
setErrorHandlers(player, media, mediaView)
MMedia(file, media, player, mediaView)
}
}
.map(media => MMDriver.MediaCreated(media.toList))
.pipeTo(self)
case MMDriver.MediaCreated(list) =>
medias = list
if (medias.nonEmpty) {
self ! ShowNextMedia
}
case MMDriver.ShowNextMedia =>
val media = medias(index)
showMedia(media)
index = index + 1
index = index % medias.size
case msg =>
log.info("not processed message {}", msg)
}
private def loadMedia(folder: File, allowedExtensions: List[String]): Array[File] = {
folder.listFiles()
.filter { file =>
allowedExtensions.exists(file.getName.endsWith(_))
}
}
private def setErrorHandlers(player: MediaPlayer, media: Media, mediaView: MediaView): Unit = {
player.setOnError(new Runnable {
override def run(): Unit = {
log.error(player.getError, "error in player")
}
})
media.setOnError(new Runnable {
override def run(): Unit = {
log.error(media.getError, "error in media")
}
})
mediaView.setOnError(new EventHandler[MediaErrorEvent] {
override def handle(event: MediaErrorEvent): Unit = {
log.error(event.getMediaError, "error in mediaView")
}
})
}
}
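For reference, the dispose-related cleanup mentioned under "Changing code" above boiled down to something like this (a sketch of the kind of thing we tried; it did not change the behaviour):
import javafx.scene.media.MediaPlayer

// Stop and dispose the previously shown player before switching,
// so its native resources are released.
def releasePlayer(player: MediaPlayer): Unit = {
  player.stop()
  player.dispose()
}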
Any suggestion on what to try next? We are out of ideas!
I'm running a Spark application (Spark 1.6.3 cluster), which does some calculations on 2 small data sets, and writes the result into an S3 Parquet file.
Here is my code:
public void doWork(JavaSparkContext sc, Date writeStartDate, Date writeEndDate, String[] extraArgs) throws Exception {
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
S3Client s3Client = new S3Client(ConfigTestingUtils.getBasicAWSCredentials());
boolean clearOutputBeforeSaving = false;
if (extraArgs != null && extraArgs.length > 0) {
if (extraArgs[0].equals("clearOutput")) {
clearOutputBeforeSaving = true;
} else {
logger.warn("Unknown param " + extraArgs[0]);
}
}
Date currRunDate = new Date(writeStartDate.getTime());
while (currRunDate.getTime() < writeEndDate.getTime()) {
try {
SparkReader<FirstData> sparkReader = new SparkReader<>(sc);
JavaRDD<FirstData> data1 = sparkReader.readDataPoints(
inputDir,
currRunDate,
getMinOfEndDateAndNextDay(currRunDate, writeEndDate));
// Normalize to 1 hour & 0.25 degrees
JavaRDD<FirstData> distinctData1 = data1.distinct();
// Floor all (distinct) values to 6 hour windows
JavaRDD<FirstData> basicData1BySixHours = distinctData1.map(d1 -> new FirstData(
d1.getId(),
TimeUtils.floorTimePerSixHourWindow(d1.getTimeStamp()),
d1.getLatitude(),
d1.getLongitude()));
// Convert Data1 to Dataframes
DataFrame data1DF = sqlContext.createDataFrame(basicData1BySixHours, FirstData.class);
data1DF.registerTempTable("data1");
// Read Data2 DataFrame
String currDateString = TimeUtils.getSimpleDailyStringFromDate(currRunDate);
String inputS3Path = basedirInput + "/dt=" + currDateString;
DataFrame data2DF = sqlContext.read().parquet(inputS3Path);
data2DF.registerTempTable("data2");
// Join data1 and data2
DataFrame mergedDataDF = sqlContext.sql("SELECT D1.Id,D2.beaufort,COUNT(1) AS hours " +
"FROM data1 as D1,data2 as D2 " +
"WHERE D1.latitude=D2.latitude AND D1.longitude=D2.longitude AND D1.timeStamp=D2.dataTimestamp " +
"GROUP BY D1.Id,D1.timeStamp,D1.longitude,D1.latitude,D2.beaufort");
// Create histogram per ID
JavaPairRDD<String, Iterable<Row>> mergedDataRows = mergedDataDF.toJavaRDD().groupBy(md -> md.getAs("Id"));
JavaRDD<MergedHistogram> mergedHistogram = mergedDataRows.map(new MergedHistogramCreator());
logger.info("Number of data1 results: " + data1DF.select("lId").distinct().count());
logger.info("Number of coordinates with data: " + data1DF.select("longitude","latitude").distinct().count());
logger.info("Number of results with beaufort histograms: " + mergedDataDF.select("Id").distinct().count());
// Save to parquet
String outputS3Path = basedirOutput + "/dt=" + TimeUtils.getSimpleDailyStringFromDate(currRunDate);
if (clearOutputBeforeSaving) {
writeWithCleanup(outputS3Path, mergedHistogram, MergedHistogram.class, sqlContext, s3Client);
} else {
write(outputS3Path, mergedHistogram, MergedHistogram.class, sqlContext);
}
} finally {
TimeUtils.progressToNextDay(currRunDate);
}
}
}
public void write(String outputS3Path, JavaRDD<MergedHistogram> outputRDD, Class outputClass, SQLContext sqlContext) {
// Apply a schema to an RDD of JavaBeans and save it as Parquet.
DataFrame fullDataDF = sqlContext.createDataFrame(outputRDD, outputClass);
fullDataDF.write().parquet(outputS3Path);
}
public void writeWithCleanup(String outputS3Path, JavaRDD<MergedHistogram> outputRDD, Class outputClass,
SQLContext sqlContext, S3Client s3Client) {
String fileKey = S3Utils.getS3Key(outputS3Path);
String bucket = S3Utils.getS3Bucket(outputS3Path);
logger.info("Deleting existing dir: " + outputS3Path);
s3Client.deleteAll(bucket, fileKey);
write(outputS3Path, outputRDD, outputClass, sqlContext);
}
public Date getMinOfEndDateAndNextDay(Date startTime, Date proposedEndTime) {
long endOfDay = startTime.getTime() - startTime.getTime() % MILLIS_PER_DAY + MILLIS_PER_DAY ;
if (endOfDay < proposedEndTime.getTime()) {
return new Date(endOfDay);
}
return proposedEndTime;
}
The size of data1 is around 150,000 and data2 is around 500,000.
What my code basically does is some data manipulation, merges the 2 data sets, does a bit more manipulation, prints some statistics, and saves to Parquet.
Spark has 25 GB of memory per server, and the code runs fine.
Each iteration takes about 2-3 minutes.
The problem starts when I run it on a large set of dates.
After a while, I get an OutOfMemory:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at scala.collection.immutable.List.$colon$colon$colon(List.scala:127)
at org.json4s.JsonDSL$JsonListAssoc.$tilde(JsonDSL.scala:98)
at org.apache.spark.util.JsonProtocol$.taskEndToJson(JsonProtocol.scala:139)
at org.apache.spark.util.JsonProtocol$.sparkEventToJson(JsonProtocol.scala:72)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
at org.apache.spark.scheduler.EventLoggingListener.onTaskEnd(EventLoggingListener.scala:164)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:42)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:38)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:87)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:72)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:72)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:71)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:70)
Last time it ran, it crashed after 233 iterations.
The line it crashed on was this:
logger.info("Number of coordinates with data: " + data1DF.select("longitude","latitude").distinct().count());
Can anyone please tell me what can be the reason for the eventual crashes?
I'm not sure that everyone will find this solution viable, but upgrading the Spark cluster to 2.2.0 seems to have resolved the issue.
I have run my application for several days now and have had no crashes yet.
This error occurs when GC takes up over 98% of the total execution time of the process. You can monitor the GC time in your Spark Web UI by going to the Stages tab at http://master:4040.
Try increasing the driver/executor memory (whichever is generating this error) using spark.{driver/executor}.memory, passed with --conf when submitting the Spark application.
Another thing to try is to change the garbage collector that the JVM is using. Read this article for that: https://databricks.com/blog/2015/05/28/tuning-java-garbage-collection-for-spark-applications.html. It clearly explains why the GC overhead error occurs and which garbage collector is best for your application.
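For example, a minimal sketch of both suggestions in Scala (the sizes are assumed values; tune them for your cluster). Note that spark.driver.memory cannot be set from application code once the driver JVM is running, so pass it via --conf or --driver-memory at submit time:
import org.apache.spark.{SparkConf, SparkContext}

// Raise executor memory and switch the executors to G1GC.
// Equivalent spark-submit flags:
//   --conf spark.executor.memory=8g
//   --conf spark.executor.extraJavaOptions=-XX:+UseG1GC
//   --conf spark.driver.memory=8g   (give this one at submit time)
val conf = new SparkConf()
  .setAppName("merged-histograms") // hypothetical app name
  .set("spark.executor.memory", "8g")
  .set("spark.executor.extraJavaOptions", "-XX:+UseG1GC")
val sc = new SparkContext(conf)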
I am new to Scala and Java altogether and am trying to run a sample producer. All it does is take some raw products and referrers stored in CSV files and use rnd to generate a random log. Following is my code:
import java.io.FileWriter
import scala.util.Random
object LogProducer extends App {
//WebLog config
val wlc = Settings.WebLogGen
val Products = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray
val Referrers = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/referrers.csv")).getLines().toArray
val Visitors = (0 to wlc.visitors).map("Visitors-" + _)
val Pages = (0 to wlc.pages).map("Pages-" + _)
val rnd = new Random()
val filePath = wlc.filePath
val fw = new FileWriter(filePath, true)
//adding randomness to time increments for demo
val incrementTimeEvery = rnd.nextInt(wlc.records - 1) + 1
var timestamp = System.currentTimeMillis()
var adjustedTimestamp = timestamp
for (iteration <- 1 to wlc.records) {
adjustedTimestamp = adjustedTimestamp + ((System.currentTimeMillis() - timestamp) * wlc.timeMultiplier)
timestamp = System.currentTimeMillis()
val action = iteration % (rnd.nextInt(200) + 1) match {
case 0 => "purchase"
case 1 => "add_to_cart"
case _ => "page_view"
}
val referrer = Referrers(rnd.nextInt(Referrers.length - 1))
val prevPage = referrer match {
case "Internal" => Pages(rnd.nextInt(Pages.length - 1))
case _ => ""
}
val visitor = Visitors(rnd.nextInt(Visitors.length - 1))
val page = Pages(rnd.nextInt(Pages.length - 1))
val product = Products(rnd.nextInt(Products.length - 1))
val line = s"$adjustedTimestamp\t$referrer\t$action\t$prevPage\t$visitor\t$page\t$product\n"
fw.write(line)
if (iteration % incrementTimeEvery == 0) {
//os.flush()
println(s"Sent $iteration messages!")
val sleeping = rnd.nextInt(incrementTimeEvery * 60)
println(s"Sleeping for $sleeping ms")
}
}
}
It is pretty straightforward: it basically generates some values and appends them to the line.
However, I am getting a big exception stack trace which I am not able to understand:
"C:\Program Files\Java\jdk1.8.0_92\bin\java...
Exception in thread "main" java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(CoderResult.java:281)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:339)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:70)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:59)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:50)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:310)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:308)
at scala.collection.AbstractIterator.to(Iterator.scala:1417)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:302)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1417)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:289)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:283)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1417)
at clickstream.LogProducer$.delayedEndpoint$clickstream$LogProducer$1(logProducer.scala:16)
at clickstream.LogProducer$delayedInit$body.apply(logProducer.scala:12)
at scala.Function0.apply$mcV$sp(Function0.scala:34)
at scala.Function0.apply$mcV$sp$(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App.$anonfun$main$1$adapted(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:389)
at scala.App.main(App.scala:76)
at scala.App.main$(App.scala:74)
at clickstream.LogProducer$.main(logProducer.scala:12)
at clickstream.LogProducer.main(logProducer.scala)
Process finished with exit code 1
Can someone please help me identify what the exception means? Thanks, all.
So it wasn't hard; it was my amateurish knowledge. It was a simple I/O issue where IntelliJ wasn't able to read the values from my CSV file. When I imported it into the resources root directory, it gave me a warning about the wrong encoding.
The error was at this point:
val Products = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray
Thanks for the efforts, though.
It was an encoding issue. For Scala, a quick fix would be to replace this:
val Products=scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray
val Referrers = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/referrers.csv")).getLines().toArray
with this:
val Products=scala.io.Source.fromInputStream(getClass.getResourceAsStream("/products.csv"))("UTF-8").getLines().toArray
val Referrers = scala.io.Source.fromInputStream(getClass.getResourceAsStream("/referrers.csv"))("UTF-8").getLines().toArray
For Java and more details, please check out this link: http://biercoff.com/malformedinputexception-input-length-1-exception-solution-for-scala-and-java/
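If the file's encoding itself is uncertain, another option (a sketch, not from the original posts) is to supply an explicit scala.io.Codec that replaces malformed input instead of throwing:
import java.nio.charset.CodingErrorAction
import scala.io.{Codec, Source}

// Lenient codec: decode as UTF-8, but replace malformed or unmappable bytes
// instead of throwing MalformedInputException.
implicit val codec: Codec = Codec("UTF-8")
  .onMalformedInput(CodingErrorAction.REPLACE)
  .onUnmappableCharacter(CodingErrorAction.REPLACE)

val Products = Source.fromInputStream(getClass.getResourceAsStream("/products.csv")).getLines().toArray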
I am trying to write a Scala program to generate an output file omega0_Real.txt which contains pre-calculated values of the cosine function for inputs ranging from 0 to pi/2 radians. Each of these calculated values is 72 bits long and is stored in hex format. The code I have written so far is as follows:
import java.io._
import scala.math._
object omega0_Real {
def main (args: Array[String]) {
val arg = (0.0).to(2-pow(2, -10), pow(2, -10))
val cosArg = arg.map (i => cos(i))
val cosBit = cosArg.map (i => List.tabulate(72)(j => (BigDecimal((i*pow(2,j))).toBigInt % 2)))
val cosStr = cosBit.map (i => i.mkString)
val cosBig = cosStr.map (i => BigInt(i, 2))
val cosBigStr = cosBig.map (i => i.toString(16))
val cosList = cosBigStr.toList
val file = "omega0_Real.txt"
val writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(file)))
for (x <- cosList) {
writer.write(x + "\n")
}
writer.close()
}
}
which gives the error java.lang.NumberFormatException: Illegal embedded sign character, followed by many others. Please help me debug this code.
PS: I ran this code line by line in the sbt console and it did not give any error, although the values generated were erroneous.
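For reference, the conversion I am aiming for can be written more directly as a scale-and-truncate (a sketch; it assumes the value is in [0, 1], so negative cosine values would still need separate handling):
// Interpret a value in [0, 1] as an unsigned 72-bit fixed-point fraction:
// scale by 2^72, truncate to BigInt, and print it in hex.
def toFixedPointHex(value: Double, bits: Int = 72): String =
  (BigDecimal(value) * BigDecimal(2).pow(bits)).toBigInt.toString(16)

// Example: toFixedPointHex(0.5) == "8" followed by 17 zeros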