NullPointerException in HDFSStore.getMeta() - java

I am stuck on one part.
I was trying to execute the example code at https://github.com/stormprocessor/storm-state/blob/master/src/jvm/storm/state/example/MapExample.java from github.com/stormprocessor/storm-state. It uses HDFS.
However, it throws a NullPointerException:
java.lang.NullPointerException
at storm.state.hdfs.HDFSStore.getMeta(HDFSStore.java:37)
at storm.state.PartitionedState.getState(PartitionedState.java:11)
at storm.state.bolt.StatefulBoltExecutor.prepare(StatefulBoltExecutor.java:36)
at backtype.storm.daemon.executor$fn__4052$fn__4061.invoke(executor.clj:610)
at backtype.storm.util$async_loop$fn__465.invoke(util.clj:375)
at clojure.lang.AFn.run(AFn.java:24)
In the code from the link above, I changed
builder.setBolt("counter", new StatefulBoltExecutor(new WordCount(), new HDFSStore("hdfs://ip-10-202-7-99.ec2.internal:8020/tmp/data")), 8)
.fieldsGrouping("spout", new Fields("word"));
to
builder.setBolt("counter", new StatefulBoltExecutor(new WordCount(), new HDFSStore("hdfs://localhost:9000/home/mohit/hadoop/tmp/dfs/data")), 8)
which points to my HDFS path.
The code referenced in the error log is at
https://github.com/stormprocessor/storm-state/blob/master/src/jvm/storm/state
Sorry for posting so few links; I am a student who is still learning and has very little reputation.
Please help, and thanks in advance!

I think the error you are getting is because your NameNode is not listening on port 9000, which is what you have configured in your code. Verify the value of the fs.default.name property in your core-site.xml file and check which port your NameNode is actually using. It could be 8020. If it is, say, 8020, then your code would look like this:
builder.setBolt("counter", new StatefulBoltExecutor(new WordCount(), new HDFSStore("hdfs://localhost:8020/home/mohit/hadoop/tmp/dfs/data")), 8)
I hope this solves your problem.
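If you want to confirm which NameNode URI and port your client actually resolves, a minimal sketch like the following (my own check, not part of the original code; it assumes the Hadoop client jars and your core-site.xml are on the classpath, and the class name is made up) just prints the configured default filesystem. Whatever host and port it shows is what belongs in the HDFSStore URI.
import org.apache.hadoop.conf.Configuration;

public class PrintNameNodeUri {
    public static void main(String[] args) {
        // new Configuration() loads core-site.xml from the classpath by default
        Configuration conf = new Configuration();
        // fs.default.name is the legacy property name; fs.defaultFS is the newer equivalent
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS"));
    }
}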

Related

apache PIG with datafu: Cannot resolve UDFs

I'm trying the quickstart from here: http://datafu.incubator.apache.org/docs/datafu/getting-started.html
I tried nearly everything, but I'm sure it must be my fault somewhere. I have already tried:
exporting PIG_HOME, CLASSPATH, PIG_CLASSPATH
starting pig with -cp datafu-pig-incubating-1.3.0.jar
registering datafu-pig-incubating-1.3.0.jar locally and in HDFS => both successful (at least no error shown)
Nothing helped.
Trying this in Pig:
register datafu-pig-incubating-1.3.0.jar
DEFINE Median datafu.pig.stats.StreamingMedian();
data = load '/user/hduser/numbers.txt' using PigStorage() as (val:int);
data2 = FOREACH (GROUP data ALL) GENERATE Median(data);
or directly
data2 = FOREACH (GROUP data ALL) GENERATE datafu.pig.stats.StreamingMedian(data);
I get this name-resolution error:
2016-06-04 17:22:22,734 [main] ERROR org.apache.pig.tools.grunt.Grunt
- ERROR 1070: Could not resolve datafu.pig.stats.StreamingMedian using imports: [, java.lang., org.apache.pig.builtin.,
org.apache.pig.impl.builtin.] Details at logfile:
/home/hadoop/pig_1465053680252.log
When I look inside datafu-pig-incubating-1.3.0.jar it looks OK, everything is in place. I also tried some Bag functions and got the same error.
I think it's the kind of beginner mistake I just don't see (I did not find specific answers for DataFu on SO or Google), so thanks in advance for shedding some light on this.
The Pig script is fine; the only thing that could break is that while registering DataFu some class dependencies could not be met.
Try running locally (pig -x local) and look at the detailed log.
Also check the version of Pig - it should be newer than 0.14.0.
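To see whether the UDF and its dependencies really resolve on a given classpath, a small standalone check like this can be more explicit than Pig's import-list error. This is my own sketch, not something from DataFu: the class name is made up, and it needs both the DataFu jar and the Pig jar on the classpath, since the UDF ultimately extends Pig classes.
public class DatafuResolveCheck {
    public static void main(String[] args) throws Exception {
        // Loads and initializes the class; a ClassNotFoundException or
        // NoClassDefFoundError here points at the jar or dependency that is missing.
        Class<?> udf = Class.forName("datafu.pig.stats.StreamingMedian");
        System.out.println("Resolved: " + udf.getName());
    }
}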

Compress map output result exception in hadoop program

In a Hadoop program, I tried to compress the map output, so I wrote the following code:
conf.setBoolean("mapred.compress.map.output",true);
conf.setClass("mapred.map.output.compression.codec",GzipCodec.class,CompressionCodec.class);
When I run it, I get the exception below. Does anybody know the reason?
WARN mapred.LocalJobRunner: job_local1149103367_0001
java.io.IOException: not a gzip file
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:495)
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:256)
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:185)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:91)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:72)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.mapred.IFile$Reader.positionToNextRecord(IFile.java:400)
at org.apache.hadoop.mapred.IFile$Reader.nextRawKey(IFile.java:425)
at org.apache.hadoop.mapred.Merger$Segment.nextRawKey(Merger.java:323)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:613)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:558)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:70)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:445)
Today I tested it again, and I found that if I put the two lines before the Job object is created,
Job job = new Job(conf, "MyCounter");
the error happens; if I put them after it, no error occurs. Why does this happen?
Are you using MRv1 or MRv2? If you are using MRv2, then use the following job config:
config.setBoolean("mapreduce.output.fileoutputformat.compress", true);
config.setClass("mapreduce.output.fileoutputformat.compress.codec",GzipCodec.class,CompressionCodec.class);
Additionally, you can set
config.set("mapreduce.output.fileoutputformat.compress.type",CompressionType.NONE.toString());
NONE, RECORD, and BLOCK are the three compression types.
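For the intermediate (map) output specifically, which is what the original mapred.compress.map.output property controls, a minimal MRv2-style sketch might look like the one below. This is my own sketch, not the original program: the class name is made up, and the key point is that the properties are set on the Configuration before the Job is constructed, because Job copies the Configuration and changes made to conf afterwards are not seen by the job, which would also explain why the position of your two lines relative to new Job(conf, ...) changes the behaviour.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;

public class CompressedMapOutputJob {
    public static void main(String[] args) throws Exception {
        // MRv2 property names for compressing the intermediate (map) output
        Configuration conf = new Configuration();
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setClass("mapreduce.map.output.compress.codec",
                      GzipCodec.class, CompressionCodec.class);
        // Create the Job only after the settings are in place; Job copies the
        // Configuration, so later changes must go through job.getConfiguration().
        Job job = Job.getInstance(conf, "MyCounter");
        // ... set mapper, reducer, input and output paths here as in the original program ...
    }
}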

H2 - General error: "java.lang.NullPointerException" [50000-182]

I have a fairly big (>2.5 GB) H2 database file. The driver version is 1.4.182. Everything worked fine, but recently the DB stopped working with this exception:
Błąd ogólny: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-182] HY000/50000 (Help)
org.h2.jdbc.JdbcSQLException: Błąd ogólny: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-182]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at org.h2.engine.Database.openDatabase(Database.java:297)
at org.h2.engine.Database.<init>(Database.java:260)
at org.h2.engine.Engine.openSession(Engine.java:60)
at org.h2.engine.Engine.openSession(Engine.java:167)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:145)
at org.h2.engine.Engine.createSession(Engine.java:128)
at org.h2.engine.Engine.createSession(Engine.java:26)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:347)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:108)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:92)
at org.h2.Driver.connect(Driver.java:72)
at org.h2.server.web.WebServer.getConnection(WebServer.java:750)
at org.h2.server.web.WebApp.test(WebApp.java:895)
at org.h2.server.web.WebApp.process(WebApp.java:221)
at org.h2.server.web.WebApp.processRequest(WebApp.java:170)
at org.h2.server.web.WebThread.process(WebThread.java:137)
at org.h2.server.web.WebThread.run(WebThread.java:93)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.NullPointerException
at org.h2.mvstore.db.ValueDataType.compare(ValueDataType.java:102)
at org.h2.mvstore.MVMap.compare(MVMap.java:741)
at org.h2.mvstore.Page.binarySearch(Page.java:388)
at org.h2.mvstore.MVMap.put(MVMap.java:179)
at org.h2.mvstore.MVMap.put(MVMap.java:133)
at org.h2.mvstore.db.TransactionStore.rollbackTo(TransactionStore.java:491)
at org.h2.mvstore.db.TransactionStore$Transaction.rollback(TransactionStore.java:785)
at org.h2.mvstore.db.MVTableEngine$Store.initTransactions(MVTableEngine.java:223)
at org.h2.engine.Database.open(Database.java:736)
at org.h2.engine.Database.openDatabase(Database.java:266)
... 17 more
The problem occurs both in my application and when using the H2 web frontend.
I have tried the solution from a similar question, but I cannot downgrade H2 to 1.3.x because it cannot read 1.4.x DB files.
My questions are:
How can I handle this? Is it possible to make it work again? I have tried downgrading H2 to 1.4.177, but it didn't help.
Is there any way to at least recover the data to another format? I could use another DB (SQLite, etc.), but I would need a way to get at the data.
EDIT: updated stacktrace
EDIT 2: Result of using Recovery tool:
$ java -cp h2-1.4.182.jar org.h2.tools.Recover
Exception in thread "main" java.lang.IllegalStateException: Unknown tag 50 [1.4.182/6]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:762)
at org.h2.mvstore.type.ObjectDataType.read(ObjectDataType.java:222)
at org.h2.mvstore.db.TransactionStore$ArrayType.read(TransactionStore.java:1792)
at org.h2.mvstore.db.TransactionStore$ArrayType.read(TransactionStore.java:1759)
at org.h2.mvstore.Page.read(Page.java:843)
at org.h2.mvstore.Page.read(Page.java:230)
at org.h2.mvstore.MVStore.readPage(MVStore.java:1813)
at org.h2.mvstore.MVMap.readPage(MVMap.java:769)
at org.h2.mvstore.Page.getChildPage(Page.java:252)
at org.h2.mvstore.MVMap.getFirstLast(MVMap.java:351)
at org.h2.mvstore.MVMap.firstKey(MVMap.java:218)
at org.h2.mvstore.db.TransactionStore.init(TransactionStore.java:169)
at org.h2.mvstore.db.TransactionStore.<init>(TransactionStore.java:117)
at org.h2.mvstore.db.TransactionStore.<init>(TransactionStore.java:81)
at org.h2.tools.Recover.dumpMVStoreFile(Recover.java:593)
at org.h2.tools.Recover.process(Recover.java:331)
at org.h2.tools.Recover.runTool(Recover.java:192)
at org.h2.tools.Recover.main(Recover.java:155)
I also noticed that two other files (.txt and .sql) were created, but they don't seem to contain any data.
I ran into the same situation last week with a JIRA database. It took me a few hours of googling this problem, but there were no answers that resolved it.
I decided to take a look at the H2 source code, and I was able to recover the whole database with very simple code. I may not understand the whole picture, e.g. the root cause or the conditions under which it happens.
However, the cause is this: when you connect to an H2 file, the H2 engine looks into the undo log and rolls back the in-progress transactions; some of the data there has an unknown type (type id 17), and H2 fails to roll back because of an exception while recognizing that type.
My code is simple: add the H2 library to your build path, then connect to the file manually and clear the undo log. I think it is only a log, so clearing it should not have a big impact (someone may correct me).
Hopefully you can resolve your problem as well.
public static void main(final String[] args) {
    // open the store (in-memory if fileName is null)
    final MVStore store = MVStore.open("C:\\temp\\h2db.mv.db");
    final MVMap<Object, Object> openMap = store.openMap("undoLog");
    openMap.clear();
    // close the store (this will persist changes)
    store.close();
}
Another solution for this problem:
Go to your home folder (~ on Linux).
Move all files named [*.mv.db] to a backup with a different name. For example: mv xyz.mv.db xyz.mv.db.backup
Restart your database.
This seems to clear out MVStore meta-data used for H2 undo features, and resolves the NPE from the MV Store compare.
I had a similar problem:
[HY000][50000] Allgemeiner Fehler: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-176]
java.lang.NullPointerException
I was trying to connect to an H2 file DB from IntelliJ. The H2 driver version was 1.3.176, but the DB file had been created with version 1.3.161. Downgrading the driver to 1.3.161 in IntelliJ solved the problem completely.

AtomCache function in biojava

Hi everyone, I'm fairly new to BioJava and am trying to run this piece of code:
AtomCache cache = new AtomCache();
cache.setPath("/tmp/");
FileParsingParameters params = cache.getFileParsingParams();
params.setLoadChemCompInfo(true);
StructureIO.setAtomCache(cache);
Structure strucuture = StructureIO.getStructure("4HHB");
After executing these lines I'm getting the following error message:
Exception in thread "main" java.lang.NoSuchFieldError: lineSplit
at org.biojava.bio.structure.align.util.UserConfiguration.(UserConfiguration.java:87)
at org.biojava.bio.structure.align.util.AtomCache.(AtomCache.java:115)
at protein_structure.main(protein_structure.java:27)
Java Result: 1
I can't figure out the reason for this error. I downloaded the PDB files for the proteins I'm working with (in this case "4HHB") into the /tmp/ directory, but the same error still shows up. Can anyone tell me how AtomCache works? Thanks

Error writing xlsx in R: Could not initialize class sun.java2d.Disposer

I'm using the xlsx package to write Excel files in R:
addPicture('trend_indirect.png' ,sheet1)
addDataFrame(df.ssis_duplmonth ,sheet1, startRow=22)
addDataFrame(df.ssis_dupltrans ,sheet1, startRow=35)
addDataFrame(df.ssis_duplmonth_dir, sheet2, startRow=22)
addDataFrame(df.ssis_dupltrans_dir, sheet2, startRow=55)
saveWorkbook(wb, file="SSIS_import_controls.xlsx")
At this point I get the following error:
> addDataFrame(df.ssis_duplmonth ,sheet1, startRow=22)
Error in .jcall("RJavaTools", "Z", "hasField", .jcast(x, "java/lang/Object"), :
java.lang.NoClassDefFoundError: Could not initialize class sun.java2d.Disposer
R version 2.15.2, 32-bit.
Thanks
Edit: I can't really make it reproducible, as the issue is probably in my setup, but I get the error when I run this:
library('xlsx')
df.test <- iris[1:5, ]
wb <- createWorkbook()
sheet1 <- createSheet(wb, 'Indirect Sales')
addPicture('trend_indirect.png' ,sheet1)
addDataFrame(df.test ,sheet1, startRow=22)
saveWorkbook(wb, file="stack_test.xlsx")
The image is just a simple ggplot graph saved as a PNG. Thanks
Try installing libxtst. That solved a similar problem for me.
I also installed fontconfig and libcups in the course of solving my issue, in case it wasn't libxtst that fixed it.
I had the same exception, but while running a Java program on Ubuntu 12.
I installed libxtst6 and added this Java parameter to my JAVA_OPTS variable: -Djava.awt.headless=true
Then it worked fine.
