Regions In Transition in a Java application - java

Please tell me what's wrong; I'm new to working with HBase. When creating regions for an HBase table from a Java application, the error below occurs:
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.client.DoNotRetryRegionException): org.apache.hadoop.hbase.client.DoNotRetryRegionException: bc3ec95b447809887e3c198afe4d1084 is not OPEN; regionState={bc3ec95b447809887e3c198afe4d1084 state=CLOSING, ts=1651224527248, server=hbase-docker,16020,1651207907804}
Code as below:
byte[][] splits = getSplits(countSplits, countSlot);
for (byte[] byteSplit : splits) {
    byte[] regionName = admin.getRegions(tableName).get(admin.getRegions(tableName).size() - 1).getRegionName();
    admin.splitRegionAsync(regionName, byteSplit);
}
This code runs and creates 1 region out of the 20 required. After the first split, the error above occurs. What needs to be added? Any help is appreciated.

The problem is solved by adding a waiting time between operations.
.....
admin.splitRegionAsync(regionName, byteSplit);
Thread.sleep(30000);
.....
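Since splitRegionAsync returns a Future, an alternative to the fixed sleep is to block until each split has actually completed before requesting the next one. A sketch against the HBase 2.x Admin API (untested against this particular cluster):

import java.util.List;
import java.util.concurrent.Future;
import org.apache.hadoop.hbase.client.RegionInfo;

byte[][] splits = getSplits(countSplits, countSlot);
for (byte[] byteSplit : splits) {
    List<RegionInfo> regions = admin.getRegions(tableName);
    byte[] regionName = regions.get(regions.size() - 1).getRegionName();
    // get() blocks until the split operation has finished on the server,
    // so the next iteration never sees the parent region in the CLOSING state.
    Future<Void> pendingSplit = admin.splitRegionAsync(regionName, byteSplit);
    pendingSplit.get(); // throws InterruptedException / ExecutionException
}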

Related

JVM Error While Writing Data Frame to Oracle Database using parLapply

I want to parallelize my data writing process. I am writing a data frame to Oracle Database. This data has 4 million rows and 8 columns. It takes 6.5 hours without parallelizing.
When I try to go parallel, I get the error
Error in checkForRemoteErrors(val) :
7 nodes produced errors; first error: No running JVM detected. Maybe .jinit() would help.
I know this error. I can solve it when I work with a single core. But I do not know how to tell the other cluster nodes the location of Java. Here is my code:
Sys.setenv(JAVA_HOME='C:/Program Files/Java/jre1.8.0_181')
library(rJava)
library(RJDBC)
library(DBI)
library(compiler)
library(dplyr)
library(data.table)
jdbcDriver = JDBC("oracle.jdbc.OracleDriver", classPath = "C:/Program Files/directory/ojdbc6.jar", identifier.quote = "\"")
jdbcConnection = dbConnect(jdbcDriver, "jdbc:oracle:thin:@//XXXXX", "YYYYY", "ZZZZZ")
By using Sys.setenv(JAVA_HOME='C:/Program Files/Java/jre1.8.0_181') I solve the same problem for a single core. But when I go parallel:
library(parallel)
no_cores <- detectCores() - 1
cl <- makeCluster(no_cores)
clusterExport(cl, varlist = list("jdbcConnection", "brand3.merge.u"))
clusterEvalQ(cl, .libPaths("C:/Users/onur.boyar/Documents/R/win-library/3.5"))
clusterEvalQ(cl, library(RJDBC))
clusterEvalQ(cl, library(rJava))
parLapply(cl, 1:length(brand3.merge.u$CELL_PH_NUM), function(x)
  dbSendUpdate(jdbcConnection,
               "INSERT INTO xxnvdw.an_cust_analytics VALUES(?,?,?,?,?,?,?,?)",
               brand3.merge.u[x, 1], brand3.merge.u[x, 2], brand3.merge.u[x, 3],
               brand3.merge.u[x, 4], brand3.merge.u[x, 5], brand3.merge.u[x, 6],
               brand3.merge.u[x, 7], brand3.merge.u[x, 8]))
# brand3.merge.u is the data frame I am trying to write.
I get the above error and I do not know how to set my Java location for other nodes.
I want to use parLapply since it is faster than foreach. Any help would be appreciated. Thanks!
JAVA_HOME environment variable
If the problem really is the location of Java, you could set the environment variable in your .Renviron file, which is likely located at ~/.Renviron. Add a line to that file and it will be propagated to all R sessions that run under your user:
JAVA_HOME='C:/Program Files/Java/jre1.8.0_181'
Alternatively, you can just add that location to your PATH environment variable.
JVM Initialization via rJava
On the other hand, the error message may point to a JVM simply not being initialized on the worker processes, which you can solve with .jinit(). A minimal example:
library(parallel)
cl <- makeCluster(detectCores())
parallel::parLapply(cl, 1:5, function(x) {
  rJava::.jinit()
  rJava::.jnew(class = "java/lang/Integer", x)$toString()
})
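Note also that the jdbcConnection object exported via clusterExport wraps an external pointer into the master process's JVM; as far as I know, such pointers do not survive the transfer to worker processes, so each worker needs to call .jinit() and open its own connection with dbConnect().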
Working around Java use
This was not specifically asked, but you can also work around the Java dependency altogether by using ODBC drivers, which Oracle provides for its database:
con <- DBI::dbConnect(
  odbc::odbc(),
  Driver = "[your driver's name]",
  ...
)

Java Weka prediction throws Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 22, Size: 22

I am trying to train a classifier on labeled data (data with the outcome vector included) and run predictions on unlabeled data using the Weka library in Java.
I've investigated every case I can find online of someone receiving the error
Exception in thread "main" java.lang.IndexOutOfBoundsException
when doing this, and they all seem to be caused by either
a mismatch between the training and prediction data structures, or
improperly handled missing values.
The solution to the first cause is to make the data structures match and verify this with train.equalHeaders(test). As far as I can tell the data structures are an exact match, and the result of equalHeaders is true. There is no missing data in the data sets I've been using for development / testing.
My training data is the famous iris data set, which I produced by using the copy that's built into R by calling data(iris); write.csv(iris, "iris.csv", row.names = F). My prediction (test) data is the exact same data set, with the last column removed to simulate unlabeled test data. I've tried reading these files as .csv and from SQL Server tables and have encountered the same result.
I have tried 2 different ways of running the predictions (the way that's currently uncommented, and the .evaluateModel method), both of which produce the same error.
I have also tried changing the algorithm, but this does not affect the error.
I have also printed the data to the screen and examined all of the available summary / diagnostic methods, all of which look as they should.
The key part of my code is as follows. Originally I posted the entire code, so if you'd like to see that it's available in the edit history.
// Add dummy outcome attribute to make shape match training
Add filter1;
filter1 = new Add();
filter1.setAttributeIndex("last");
filter1.setNominalLabels("'\"setosa\"','\"versicolor\"','\"virginica\"'");
filter1.setAttributeName("\"Species\"");
filter1.setInputFormat(test);
test = Filter.useFilter(test, filter1);

Instances newTest = Filter.useFilter(test, filter); // create new test set
// set class attribute
newTest.setClassIndex(newTest.numAttributes() - 1);
// create copy
Instances labeled = new Instances(newTest);

System.out.println("check headers: " + newTrain.equalHeaders(newTest));
System.out.println(newTest); // throws the error if included

// label instances
for (int i = 0; i < newTest.numInstances(); i++) { // properly indexed
    System.out.println(i);
    double clsLabel = rf.classifyInstance(newTest.instance(i)); // throws the error if the earlier print is not included
    labeled.instance(i).setClassValue(clsLabel);
}
The full error is:
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 22, Size: 22
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at weka.core.Attribute.value(Attribute.java:735)
at weka.core.AbstractInstance.stringValue(AbstractInstance.java:668)
at weka.core.AbstractInstance.stringValue(AbstractInstance.java:644)
at weka.core.AbstractInstance.toString(AbstractInstance.java:756)
at weka.core.DenseInstance.toStringNoWeight(DenseInstance.java:330)
at weka.core.AbstractInstance.toStringMaxDecimalDigits(AbstractInstance.java:692)
at weka.core.AbstractInstance.toString(AbstractInstance.java:712)
at java.lang.String.valueOf(String.java:2981)
at java.lang.StringBuffer.append(StringBuffer.java:265)
at weka.core.Instances.stringWithoutHeader(Instances.java:1734)
at weka.core.Instances.toString(Instances.java:1718)
at java.lang.String.valueOf(String.java:2981)
at java.io.PrintStream.println(PrintStream.java:821)
at weka.core.Tee.println(Tee.java:484)
at myWeka.myWeka.main(myWeka.java:262)
C:\Users\eggwhite\AppData\Local\NetBeans\Cache\8.1\executor-snippets\run.xml:53:
Java returned: 1
BUILD FAILED (total time: 23 seconds)
The error is thrown by double clsLabel = rf.classifyInstance(newTest.instance(i)); unless I include the line System.out.println(newTest); for diagnostic purposes, in which case the same error is thrown by that line.
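For reference, the usual Weka idiom for scoring unlabeled data is to build the test set directly on the training header, so that nominal class values and attribute indices match by construction and equalHeaders is true automatically. A minimal sketch (here train stands for the labeled training set, test for the raw unlabeled set, and rf for the trained classifier from the elided code; this illustrates the pattern rather than being a confirmed fix for this exact error):

import weka.core.DenseInstance;
import weka.core.Instances;
import weka.core.Utils;

// Build an empty test set that shares the training header exactly.
Instances unlabeled = new Instances(train, 0);
unlabeled.setClassIndex(unlabeled.numAttributes() - 1);

// Copy each raw test row into the training structure, leaving the class missing.
for (int i = 0; i < test.numInstances(); i++) {
    double[] vals = new double[unlabeled.numAttributes()];
    for (int j = 0; j < test.numAttributes(); j++) {
        vals[j] = test.instance(i).value(j);
    }
    vals[unlabeled.classIndex()] = Utils.missingValue();
    unlabeled.add(new DenseInstance(1.0, vals));
}

// Classify against the shared header; label indices now always resolve.
for (int i = 0; i < unlabeled.numInstances(); i++) {
    double clsLabel = rf.classifyInstance(unlabeled.instance(i));
    unlabeled.instance(i).setClassValue(clsLabel);
}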

H2 - General error: "java.lang.NullPointerException" [50000-182]

I have a quite big (>2.5 GB) H2 database file. The driver version is 1.4.182. Everything worked fine, but recently the DB stopped working with the exception:
Błąd ogólny: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-182] HY000/50000 (Help)
org.h2.jdbc.JdbcSQLException: Błąd ogólny: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-182]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at org.h2.engine.Database.openDatabase(Database.java:297)
at org.h2.engine.Database.<init>(Database.java:260)
at org.h2.engine.Engine.openSession(Engine.java:60)
at org.h2.engine.Engine.openSession(Engine.java:167)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:145)
at org.h2.engine.Engine.createSession(Engine.java:128)
at org.h2.engine.Engine.createSession(Engine.java:26)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:347)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:108)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:92)
at org.h2.Driver.connect(Driver.java:72)
at org.h2.server.web.WebServer.getConnection(WebServer.java:750)
at org.h2.server.web.WebApp.test(WebApp.java:895)
at org.h2.server.web.WebApp.process(WebApp.java:221)
at org.h2.server.web.WebApp.processRequest(WebApp.java:170)
at org.h2.server.web.WebThread.process(WebThread.java:137)
at org.h2.server.web.WebThread.run(WebThread.java:93)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.NullPointerException
at org.h2.mvstore.db.ValueDataType.compare(ValueDataType.java:102)
at org.h2.mvstore.MVMap.compare(MVMap.java:741)
at org.h2.mvstore.Page.binarySearch(Page.java:388)
at org.h2.mvstore.MVMap.put(MVMap.java:179)
at org.h2.mvstore.MVMap.put(MVMap.java:133)
at org.h2.mvstore.db.TransactionStore.rollbackTo(TransactionStore.java:491)
at org.h2.mvstore.db.TransactionStore$Transaction.rollback(TransactionStore.java:785)
at org.h2.mvstore.db.MVTableEngine$Store.initTransactions(MVTableEngine.java:223)
at org.h2.engine.Database.open(Database.java:736)
at org.h2.engine.Database.openDatabase(Database.java:266)
... 17 more
The problem occurs both in my application and when using the H2 web frontend.
I have tried the solution from a similar question, but I cannot downgrade H2 to 1.3.x as it cannot read 1.4.x DB files.
My questions are:
How can I handle this? Is it possible to make it work again? I have tried downgrading H2 to 1.4.177 but it didn't help.
Is there any way to at least recover the data to another format? I could use another DB (SQLite, etc.), but I would need a way to get at the data.
EDIT: updated stacktrace
EDIT 2: Result of using Recovery tool:
$ java -cp h2-1.4.182.jar org.h2.tools.Recover
Exception in thread "main" java.lang.IllegalStateException: Unknown tag 50 [1.4.182/6]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:762)
at org.h2.mvstore.type.ObjectDataType.read(ObjectDataType.java:222)
at org.h2.mvstore.db.TransactionStore$ArrayType.read(TransactionStore.java:1792)
at org.h2.mvstore.db.TransactionStore$ArrayType.read(TransactionStore.java:1759)
at org.h2.mvstore.Page.read(Page.java:843)
at org.h2.mvstore.Page.read(Page.java:230)
at org.h2.mvstore.MVStore.readPage(MVStore.java:1813)
at org.h2.mvstore.MVMap.readPage(MVMap.java:769)
at org.h2.mvstore.Page.getChildPage(Page.java:252)
at org.h2.mvstore.MVMap.getFirstLast(MVMap.java:351)
at org.h2.mvstore.MVMap.firstKey(MVMap.java:218)
at org.h2.mvstore.db.TransactionStore.init(TransactionStore.java:169)
at org.h2.mvstore.db.TransactionStore.<init>(TransactionStore.java:117)
at org.h2.mvstore.db.TransactionStore.<init>(TransactionStore.java:81)
at org.h2.tools.Recover.dumpMVStoreFile(Recover.java:593)
at org.h2.tools.Recover.process(Recover.java:331)
at org.h2.tools.Recover.runTool(Recover.java:192)
at org.h2.tools.Recover.main(Recover.java:155)
I also noticed that two other files (.txt and .sql) have been created, but they don't seem to contain any data.
I ran into the same situation last week with a JIRA database. It took me a few hours of googling, but there were no answers that resolved the situation.
I decided to take a look at the H2 source code, and I was able to recover the whole database with very simple code. I may not understand the whole picture, e.g. the root cause or the conditions under which this happens.
However, the cause is: when you connect to an H2 file, the H2 engine looks into the undoLog map and rolls back the in-progress transactions. Some of that data has an unknown type (id 17), and H2 fails to roll back due to an exception while recognizing the type.
My code is simple: add the H2 lib to your build path, then just manually connect to the file and clear the undoLog, because I think it is just a log and clearing it will not have a big impact (someone may correct me).
Hopefully, you can resolve your problem as well.
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public static void main(final String[] args) {
    // open the store (in-memory if fileName is null)
    final MVStore store = MVStore.open("C:\\temp\\h2db.mv.db");
    final MVMap<Object, Object> openMap = store.openMap("undoLog");
    openMap.clear();
    // close the store (this will persist changes)
    store.close();
}
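One caveat to add: clearing the undoLog permanently discards whatever uncommitted transactions it recorded, so it is worth copying the .mv.db file somewhere safe before running this.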
Another solution for this problem:
Go to your home folder (~ on Linux).
Move all files named *.mv.db to a backup with a different name. For example: mv xyz.mv.db xyz.mv.db.backup
Restart your database.
This seems to clear out the MVStore metadata used for H2's undo features, and resolves the NPE from the MVStore compare.
I had a similar problem:
[HY000][50000] Allgemeiner Fehler: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-176]
java.lang.NullPointerException
I tried to connect to an H2 file DB with IntelliJ. The H2 driver version was 1.3.176, but the DB file had been created with version 1.3.161. Downgrading the driver to 1.3.161 in IntelliJ solved the problem completely.

AtomCache function in BioJava

Hi everyone, I'm fairly new to BioJava and trying to implement this piece of code:
AtomCache cache = new AtomCache();
cache.setPath("/tmp/");
FileParsingParameters params = cache.getFileParsingParams();
params.setLoadChemCompInfo(true);
StructureIO.setAtomCache(cache);
Structure structure = StructureIO.getStructure("4HHB");
After executing these lines I'm getting the following error message:
Exception in thread "main" java.lang.NoSuchFieldError: lineSplit
at org.biojava.bio.structure.align.util.UserConfiguration.<init>(UserConfiguration.java:87)
at org.biojava.bio.structure.align.util.AtomCache.<init>(AtomCache.java:115)
at protein_structure.main(protein_structure.java:27)
Java Result: 1
I can't figure out the reason for this error. I downloaded the PDB files for the proteins I'm working with (in this case "4HHB") into the /tmp/ directory, but the same error still shows up. Can anyone tell me how the AtomCache function works? Thanks
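One observation that may help: a NoSuchFieldError thrown during class initialization usually means two incompatible versions of the same library ended up on the classpath (here, presumably mixed BioJava jars), rather than anything to do with the PDB files themselves. A quick diagnostic sketch is to print the runtime classpath and look for duplicate biojava entries:

// Diagnostic sketch: list classpath entries to spot mixed BioJava versions.
for (String entry : System.getProperty("java.class.path").split(java.io.File.pathSeparator)) {
    System.out.println(entry);
}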

Xuggler live streaming delay and high cpu usage

I'm currently using Xuggler to receive the video stream of an AR.Drone. The stream format is H.264 720p. I can decode and display the video using the following code, but the processor usage is very high (100% on a dual-core 2 GHz machine) and there is a huge delay in the stream that keeps increasing.
final IMediaReader reader = ToolFactory.makeReader("http://192.168.1.1:5555");
reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);
MediaListenerAdapter adapter = new MediaListenerAdapter() {
    public void onVideoPicture(IVideoPictureEvent e) {
        currentframe = e.getImage();
        // Draw frame
    }

    public void onOpenCoder(IOpenCoderEvent e) {
        videostreamopened = true;
    }
};
reader.addListener(adapter);
while (!stop) {
    try {
        reader.readPacket();
    } catch (RuntimeException re) {
        // Errors happen relatively often
    }
}
Using the Xuggler sample application resolves none of the problems, so I think my approach is correct. Also, when I decrease the resolution to 360p the stream is real-time and everything works OK. Does anybody know whether these performance issues are normal, or what I have to do to avoid them? I am very new to this and have not been able to find information, so does anybody have suggestions?
By the way, I tried changing the bitrate without success. Calling reader.getContainer().getStream(0).getStreamCoder().setBitRate(bitrate); seems to be ignored...
Thanks in advance!
UPDATE:
I get many of these errors:
9593 [Thread-7] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] mmco: unref short failure
39593 [Thread-7] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] number of reference frames (0+2) exceeds max (1; probably corrupt input), discarding one
39593 [Thread-15] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] reference overflow
39593 [Thread-15] ERROR org.ffmpeg - [h264 @ 0x7f12d40e53c0] decode_slice_header error
UPDATE 2: Changing the codec solves the above errors, but performance is still poor.
