The error I am getting is as follows:
" Aug 25, 2015 1:47:41 PM com.orientechnologies.common.log.OLogManager log
INFO: OrientDB auto-config DISKCACHE=4,161MB (heap=1,776MB os=7,985MB disk=416,444MB)
Aug 25, 2015 1:47:41 PM com.orientechnologies.common.log.OLogManager log
WARNING: segment file 'database.ocf' was not closed correctly last time
Exception in thread "main" com.orientechnologies.common.exception.OException: Error on creation
of shared resource
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:55)
at com.orientechnologies.orient.core.metadata.OMetadataDefault.init(OMetadataDefault.java:175)
at com.orientechnologies.orient.core.metadata.OMetadataDefault.load(OMetadataDefault.java:77)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.initAtFirstOpen(ODatabaseDocumentTx.java:2633)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:254)
at arss.db.main(db.java:17)
Caused by: com.orientechnologies.orient.core.exception.ORecordNotFoundException:
The record with id '#0:1' not found
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:266)
at com.orientechnologies.orient.core.record.impl.ODocument.reload(ODocument.java:665)
at com.orientechnologies.orient.core.type.ODocumentWrapper.reload(ODocumentWrapper.java:91)
at com.orientechnologies.orient.core.type.ODocumentWrapperNoClass.reload(ODocumentWrapperNoClass.java:73)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.load(OSchemaShared.java:786)
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:180)
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:175)
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:53)
... 5 more
Caused by: com.orientechnologies.orient.core.exception.ODatabaseException: Error
on retrieving record #0:1 (cluster: internal)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1605)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.loadRecord(OTransactionNoTx.java:80)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:1453)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:117)
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:260)
... 12 more
Caused by: java.lang.NoSuchMethodError: com.orientechnologies.common.concur.lock.ONewLockManager.tryAcquireSharedLock(Ljava/lang/Object;J)Z
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.acquireReadLock(OAbstractPaginatedStorage.java:1301)
at com.orientechnologies.orient.core.tx.OTransactionAbstract.lockRecord(OTransactionAbstract.java:120)
at com.orientechnologies.orient.core.id.ORecordId.lock(ORecordId.java:282)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.lockRecord(OAbstractPaginatedStorage.java:1784)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.readRecord(OAbstractPaginatedStorage.java:1424)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.readRecord(OAbstractPaginatedStorage.java:697)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1572)
... 16 more"
The code that I use is:
package arss;

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class db {
    public static void main(String[] args) {
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:C:/AR/AR/Newfolder/orientdb-community-2.0.3_S/databases/GratefulDeadConcerts").open("admin", "admin");
        try {
            // CREATE A NEW DOCUMENT AND FILL IT
            ODocument doc = new ODocument("Person");
            doc.field("name", "Luke");
            doc.field("surname", "Skywalker");
            doc.field("city", new ODocument("City").field("name", "Rome").field("country", "Italy"));
            // SAVE THE DOCUMENT
            doc.save();
        } finally {
            // close exactly once, in the finally block
            db.close();
        }
    }
}
I am not sure whether you need a .flush() with ODocuments; you should look that up (or whether save() is the equivalent and that is enough).
From those two error lines:
WARNING: segment file 'database.ocf' was not closed correctly last time
and
com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1572) ... 16 more
I think it has something to do with the database.ocf file itself. I don't know if this helps, but try opening it manually, preferably with/without admin, and closing it again. ("Have you tried turning it off and on again?")
If there is still an error, check whether it is a different one.
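One more observation: the deepest "Caused by" in your trace is a NoSuchMethodError, which usually means two different OrientDB versions are mixed on the classpath (for example an older core jar next to a newer client jar). A small sketch to check where the classes involved are loaded from (plain JDK, no extra libraries):

public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Print the jar each OrientDB class comes from; if the two locations
        // belong to jars of different versions, align them to the same release.
        String[] names = {
            "com.orientechnologies.common.concur.lock.ONewLockManager",
            "com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage"
        };
        for (String name : names) {
            Class<?> c = Class.forName(name);
            System.out.println(name + " -> "
                    + c.getProtectionDomain().getCodeSource().getLocation());
        }
    }
}

If both point into the same, single OrientDB distribution, then the database.ocf theory above is the next thing to check.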
Currently I get the following output when I run my code:
Caused by: java.lang.NullPointerException
at com.gargoylesoftware.htmlunit.html.HtmlScript$2.execute(HtmlScript.java:227)
at com.gargoylesoftware.htmlunit.html.HtmlScript.onAllChildrenAddedToPage(HtmlScript.java:256)
at com.gargoylesoftware.htmlunit.html.parser.neko.HtmlUnitNekoDOMBuilder.endElement(HtmlUnitNekoDOMBuilder.java:560)
at org.apache.xerces.parsers.AbstractSAXParser.endElement(Unknown Source)
at com.gargoylesoftware.htmlunit.html.parser.neko.HtmlUnitNekoDOMBuilder.endElement(HtmlUnitNekoDOMBuilder.java:514)
at net.sourceforge.htmlunit.cyberneko.HTMLTagBalancer.callEndElement(HTMLTagBalancer.java:1192)
at net.sourceforge.htmlunit.cyberneko.HTMLTagBalancer.endElement(HTMLTagBalancer.java:1132)
at net.sourceforge.htmlunit.cyberneko.filters.DefaultFilter.endElement(DefaultFilter.java:219)
at net.sourceforge.htmlunit.cyberneko.filters.NamespaceBinder.endElement(NamespaceBinder.java:312)
at net.sourceforge.htmlunit.cyberneko.HTMLScanner$ContentScanner.scanEndElement(HTMLScanner.java:3189)
at net.sourceforge.htmlunit.cyberneko.HTMLScanner$ContentScanner.scan(HTMLScanner.java:2114)
at net.sourceforge.htmlunit.cyberneko.HTMLScanner.scanDocument(HTMLScanner.java:937)
at net.sourceforge.htmlunit.cyberneko.HTMLConfiguration.parse(HTMLConfiguration.java:443)
at net.sourceforge.htmlunit.cyberneko.HTMLConfiguration.parse(HTMLConfiguration.java:394)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at com.gargoylesoftware.htmlunit.html.parser.neko.HtmlUnitNekoDOMBuilder.parse(HtmlUnitNekoDOMBuilder.java:760)
at com.gargoylesoftware.htmlunit.html.parser.neko.HtmlUnitNekoHtmlParser.parseFragment(HtmlUnitNekoHtmlParser.java:158)
at com.gargoylesoftware.htmlunit.html.parser.neko.HtmlUnitNekoHtmlParser.parseFragment(HtmlUnitNekoHtmlParser.java:112)
at com.gargoylesoftware.htmlunit.javascript.host.Element.parseHtmlSnippet(Element.java:868)
at com.gargoylesoftware.htmlunit.javascript.host.Element.setInnerHTML(Element.java:920)
at com.gargoylesoftware.htmlunit.javascript.host.html.HTMLElement.setInnerHTML(HTMLElement.java:676)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at net.sourceforge.htmlunit.corejs.javascript.MemberBox.invoke(MemberBox.java:188)
... 30 more
The "... 30 more" above means that I cannot see the line of my code that causes the problem. How can I get the full stack trace?
Also, it does not seem to run any of my catch statements with printStackTrace unless I turn off logging with the commands below:
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit").setLevel(java.util.logging.Level.OFF);
java.util.logging.Logger.getLogger("org.apache.http").setLevel(java.util.logging.Level.OFF);
**I have put my full code below, as suggested in the comments. I did not include the actual URL, though.**
import java.io.File;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class download_to_send_to_stackoverflow
{
    public static void main(String args[])
    {
        String url = "did not want to include the actual web site ";
        HtmlPage page = null;
        WebClient webClient = new WebClient();
        webClient.getOptions().setRedirectEnabled(true);
        webClient.getOptions().setJavaScriptEnabled(true); // tried making false - but didn't work
        webClient.getOptions().setCssEnabled(false);
        webClient.getOptions().setUseInsecureSSL(true);
        webClient.getOptions().setTimeout(30000);
        webClient.setJavaScriptTimeout(30000); // e.g. 50s
        webClient.waitForBackgroundJavaScript(15000); // added 2017/1/24
        webClient.waitForBackgroundJavaScriptStartingBefore(30000); // added 2017/1/24

        try
        {
            page = webClient.getPage(url);
            savePage_just_html(page);
        }
        catch (Exception e)
        {
            System.out.println("Exception thrown: ----------------------------------------" + e.getMessage());
            e.printStackTrace();
        }
    }

    protected static String savePage_just_html(HtmlPage page)
    {
        try
        {
            File saveFolder = new File("spool/_");
            page.save(saveFolder); // this is the line causing the error with the limited stack trace
        }
        catch (Exception e)
        {
            System.out.println("IOException was thrown: ---------------------------------- " + e.getMessage());
            e.printStackTrace();
        }
        return "file.html";
    }
}
------------------------------------------------------
page.save(saveFolder); above is the problem line. If I take it out I don't get the limited stack trace errors, but I also don't save the HTML page, which I want to do.
The question is: why does this line only print a limited stack trace?
The "Caused by" means that this is a nested exception.
The "... 30 more" in a nested exception stacktrace means that those 30 lines are the same as in the primary exception.
So just look at the primary stacktrace to see those stack frames.
Apparently your exception stacktraces are being generated in a way that is suppressing (or removing) the primary exception stacktrace. This is puzzling, but it is probably being done by one of the unit testing libraries that you are using.
Getting to the bottom of this will probably involve you either debugging whatever it is that is generating the stacktrace, or writing a minimal reproducible example so that someone else can debug it.
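If you want to print the whole chain yourself, independent of whatever the logging or test framework does to it, a minimal sketch (plain JDK, nothing assumed beyond having the Throwable in hand) is:

static void printFullTrace(Throwable t) {
    // Walk the cause chain and print every frame of every exception,
    // without the "... N more" abbreviation used by printStackTrace().
    for (Throwable cur = t; cur != null; cur = cur.getCause()) {
        System.err.println(cur);
        for (StackTraceElement frame : cur.getStackTrace()) {
            System.err.println("\tat " + frame);
        }
    }
}

Calling printFullTrace(e) in the catch block around page.save(saveFolder) should show every frame of each nested exception, including the ones that point back into your own code.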
I have a pre-trained model like Inception-v3, and I want to remove the output layer and use the rest for image recognition, as in the example given by TensorFlow.
Just like the Python framework Keras, which has a method like model.layers.pop(), I tried to do the same with the TensorFlow Java API. First I tried dl4j, but when I imported the Keras model I got an error like this:
2017-06-15 21:15:43 INFO KerasInceptionV3Net:52 - Importing Inception model from data/inception-model.json
2017-06-15 21:15:43 INFO KerasInceptionV3Net:53 - Importing Weights model from data/inception_v3_complete
Exception in thread "main" java.lang.RuntimeException: Unknown exception.
at org.bytedeco.javacpp.hdf5$H5File.allocate(Native Method)
at org.bytedeco.javacpp.hdf5$H5File.<init>(hdf5.java:12713)
at org.deeplearning4j.nn.modelimport.keras.Hdf5Archive.<init>(Hdf5Archive.java:61)
at org.deeplearning4j.nn.modelimport.keras.KerasModel$ModelBuilder.weightsHdf5Filename(KerasModel.java:603)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasModelAndWeights(KerasModelImport.java:176)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.<init>(KerasInceptionV3Net.java:55)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.main(KerasInceptionV3Net.java:108)
HDF5-DIAG: Error detected in HDF5 (1.10.0-patch1) thread 0:
#000: C:\autotest\HDF5110ReleaseRWDITAR\src\H5F.c line 579 in H5Fopen(): unable to open file
major: File accessibilty
minor: Unable to open file
#001: C:\autotest\HDF5110ReleaseRWDITAR\src\H5Fint.c line 1100 in H5F_open(): unable to open file: time = Thu Jun 15 21:15:44 2017,name = 'data/inception_v3_complete', tent_flags = 0
major: File accessibilty
minor: Unable to open file
#002: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FD.c line 812 in H5FD_open(): open failed
major: Virtual File Layer
minor: Unable to initialize object
#003: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FDsec2.c line 348 in H5FD_sec2_open(): unable to open file: name = 'data/inception_v3_complete', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0
major: File accessibilty
minor: Unable to open file
So I went back to TensorFlow. My plan was to modify the model in Keras and convert it to a TensorFlow graph. Here is my conversion script:
input_fld = './'
output_node_names_of_input_network = ["pred0"]
write_graph_def_ascii_flag = True
output_node_names_of_final_network = 'output_node'
output_graph_name = 'test2.pb'

from keras.models import load_model
import tensorflow as tf
import os
import os.path as osp
from keras.applications.inception_v3 import InceptionV3
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD

output_fld = input_fld + 'tensorflow_model/'
if not os.path.isdir(output_fld):
    os.mkdir(output_fld)

net_model = InceptionV3(weights='imagenet', include_top=True)

num_output = len(output_node_names_of_input_network)
pred = [None]*num_output
pred_node_names = [None]*num_output
for i in range(num_output):
    pred_node_names[i] = output_node_names_of_final_network+str(i)
    pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])
print('output nodes names are: ', pred_node_names)

from keras import backend as K
sess = K.get_session()

if write_graph_def_ascii_flag:
    f = 'only_the_graph_def.pb.ascii'
    tf.train.write_graph(sess.graph.as_graph_def(), output_fld, f, as_text=True)
    print('saved the graph definition in ascii format at: ', osp.join(output_fld, f))

from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))
I got the model as a .pb file, but when I plugged it into the TensorFlow LabelImage example, I got this error:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:285)
at org.tensorflow.Session$Runner.run(Session.java:235)
at com.dlut.cmh.sheng.LabelImage.executeInceptionGraph(LabelImage.java:98)
at com.dlut.cmh.sheng.LabelImage.main(LabelImage.java:51)
I don't know how to solve this. Can anyone help me, or is there another way to do this?
The error message you get from the TensorFlow Java API:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
suggests that the model is constructed in a way that requires you to feed a boolean value for the tensor named batch_normalization_1/keras_learning_phase.
So, you'd have to include that in your call to run by changing:
try (Session s = new Session(g);
Tensor result = s.runner().feed("input",image).fetch("output").run().get(0)) {
to something like:
try (Session s = new Session(g);
Tensor learning_phase = Tensor.create(false);
Tensor result = s.runner().feed("input", image).feed("batch_normalization_1/keras_learning_phase", learning_phase).fetch("output").run().get(0)) {
The names of nodes you feed and fetch depend on the model, so it's possible that the names of the 'input' and 'output' nodes are different as well.
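One way to discover those names is to iterate the operations of the imported graph. A sketch, assuming the TensorFlow 1.x Java API and the test2.pb produced by the conversion script above (per that script, the output node should be named output_node0):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Iterator;
import org.tensorflow.Graph;
import org.tensorflow.Operation;

public class ListGraphOps {
    public static void main(String[] args) throws Exception {
        // Load the frozen graph written by the conversion script
        byte[] graphDef = Files.readAllBytes(Paths.get("tensorflow_model/test2.pb"));
        try (Graph g = new Graph()) {
            g.importGraphDef(graphDef);
            // Print every operation name; look for the input placeholder
            // and the output node to use in feed() and fetch().
            Iterator<Operation> ops = g.operations();
            while (ops.hasNext()) {
                System.out.println(ops.next().name());
            }
        }
    }
}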
You might also want to consider using the TensorFlow SavedModel format (see also https://github.com/tensorflow/serving/issues/310#issuecomment-297015251)
Hope that helps.
Can we store the value returned from the OMDb function searchOneMovie(moviename) so that it can be displayed in a JSP page when a button is clicked? I am developing a web application where clicking the button should show the details of the movie the user searched for, but with searchOneMovie() I get the output on the console, not in the web page. Please tell me how to display the searchOneMovie() value in a JSP page.
CODE:
public void search(HttpServletResponse response, String moviename) throws OmdbSyntaxErrorException, OmdbConnectionErrorException, OmdbMovieNotFoundException, IOException {
    Omdb o = new Omdb();
    try {
        PrintWriter out = response.getWriter();
        out.print(o.searchOneMovie(moviename));
    } catch (Exception ignore) {
    }
}
OUTPUT: This output is coming on the console, but I want it in the web page. Please help.
Mar 13, 2017 8:13:51 PM com.omdbapi.RestClient execute
INFO: executing GET http://www.omdbapi.com/?t=star+wars HTTP/1.1
Mar 13, 2017 8:13:52 PM com.omdbapi.Omdb resultToJson
INFO: received {"Title":"Star Wars: Episode IV - A New Hope","Year":"1977","Rated":"PG","Released":"25 May 1977","Runtime":"121 min","Genre":"Action, Adventure, Fantasy","Director":"George Lucas","Writer":"George Lucas","Actors":"Mark Hamill, Harrison Ford, Carrie Fisher, Peter Cushing","Plot":"Luke Skywalker joins forces with a Jedi Knight, a cocky pilot, a wookiee and two droids to save the galaxy from the Empire's world-destroying battle-station, while also attempting to rescue Princess Leia from the evil Darth Vader.","Language":"English","Country":"USA","Awards":"Won 6 Oscars. Another 50 wins & 28 nominations.","Poster":"https://images-na.ssl-images-amazon.com/images/M/MV5BYzQ2OTk4N2QtOGQwNy00MmI3LWEwNmEtOTk0OTY3NDk2MGJkL2ltYWdlL2ltYWdlXkEyXkFqcGdeQXVyNjc1NTYyMjg#._V1_SX300.jpg","Metascore":"92","imdbRating":"8.7","imdbVotes":"963,318","imdbID":"tt0076759","Type":"movie","Response":"True"}
I am trying to create a UDF for importing into Pig that matches a regex pattern on a date. The regex has been tested and works as expected, but I am having trouble with the following code:
package com.date.format;
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
public class DATERANGE extends EvalFunc<String> {

    @Override
    public String exec(Tuple arg0) throws IOException {
        try
        {
            String pattern = "(Oct\\W(?:1[5-9]|2[0-3])\\W(?:(?:0?9|10):\\d{2}:\\d{2}|11:00:00))";
            Pattern pat = Pattern.compile(pattern);
            Matcher match = pat.matcher((String) arg0.get(0));
            if (match.find())
            {
                return match.group(0);
            }
            else return "none";
        }
        catch (Exception e)
        {
            throw new IOException("Caught exception processing input row ", e);
        }
    }
}
After compiling the above Java code, exporting it as a jar, and running it inside Hadoop with the following Pig script:
register 'DATEFormat.jar';
ld = LOAD 'dates/date_data_three' AS (date:chararray);
loop = foreach ld generate com.date.format.DATERANGE(date) as d:chararray;
dump loop;
I get the following error:
ERROR 2078: Caught error from UDF: com.date.format.DATERANGE [Caught exception
processing input row ]
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator
for alias loop
at org.apache.pig.PigServer.openIterator(PigServer.java:912)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.loadScript(GruntParser.java:566)
at org.apache.pig.tools.grunt.GruntParser.processScript(GruntParser.java:513)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.Script(PigScriptParser.java:1014)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:550)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias loop
at org.apache.pig.PigServer.storeEx(PigServer.java:1015)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
... 16 more
The data file contains dates as shown below:
Wed Oct 15 09:26:09 BST 2014
Wed Oct 15 19:26:09 BST 2014
Wed Oct 18 08:26:09 BST 2014
Wed Oct 23 10:26:09 BST 2014
Sun Oct 05 09:26:09 BST 2014
Wed Nov 20 19:26:09 BST 2014
Does anybody know the correct way to implement a Java UDF for Pig that would work with the Regex I have provided?
Thanks
I recommend you use the built-in REGEX_EXTRACT command; this will be much easier than writing a UDF.
ld = LOAD 'input.txt' AS (date:chararray);
loop = foreach ld generate REGEX_EXTRACT(date,'(Oct\\W(?:1[5-9]|2[0-3])\\W(?:(?:0?9|10):\\d{2}:\\d{2}|11:00:00))',1) as d:chararray;
C = FILTER loop by d is not null;
D = FOREACH C GENERATE $0;
DUMP D;
Output:
(Oct 15 09:26:09)
(Oct 23 10:26:09)
Your regex UDF is also working fine for me. I just copied your input and Java code and executed it locally; it works perfectly. Please see the output below that I got from your UDF code. I guess you may need to check whether your classpath is set properly.
(Oct 15 09:26:09)
(none)
(none)
(Oct 23 10:26:09)
(none)
(none)
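For reference, this is roughly what such a local check can look like (DaterangeLocalTest is a made-up name; it assumes the Pig jars are on the classpath and uses Pig's TupleFactory to build the single-field input tuple the same way Pig would):

import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import com.date.format.DATERANGE;

public class DaterangeLocalTest {
    public static void main(String[] args) throws Exception {
        DATERANGE udf = new DATERANGE();
        TupleFactory tupleFactory = TupleFactory.getInstance();
        String[] lines = {
            "Wed Oct 15 09:26:09 BST 2014",
            "Wed Oct 15 19:26:09 BST 2014",
            "Wed Oct 23 10:26:09 BST 2014"
        };
        for (String line : lines) {
            // Each input row becomes a one-field tuple, as in the Pig script
            Tuple t = tupleFactory.newTuple(1);
            t.set(0, line);
            System.out.println("(" + udf.exec(t) + ")");
        }
    }
}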
Even better, you could use ToDate:
Load your data into filtered_raw_financings_csvs with close_date as a chararray:
financings_csvs = FOREACH filtered_raw_financings_csvs
GENERATE name,
city,
state,
(close_date==''?NULL:ToDate(close_date, 'dd-MMM-yy')) AS close_date
;
Build your date format string as described here:
http://docs.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html
This snippet is shown in context here:
http://nathan.vertile.com/blog/2015/04/17/handling-dates-in-hadoop-pig/
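For the record layout in the question's data file, the format string would be along these lines (a sketch; the three-letter zone name "BST" only parses reliably with a UK-style locale, so adjust if your real data differs):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class ParseDateExample {
    public static void main(String[] args) throws Exception {
        // Pattern for lines like "Wed Oct 15 09:26:09 BST 2014"
        SimpleDateFormat fmt = new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy", Locale.UK);
        Date d = fmt.parse("Wed Oct 15 09:26:09 BST 2014");
        System.out.println(d);
    }
}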
I have the XML below (updated: added "symptoms"):
<EBF>
  <EBFINFO>
    <EBFNUM>EBF262323</EBFNUM>
    <RELEASEDATETIME>May 06, 2011</RELEASEDATETIME>
    <SYMPTOMS>
      <br> INFA252994 - 910 : While running concurrent session Workflow manager hangs and workflow monitor does not respond</br>
      <br> INFA262323 - 910 : pmcmd, pmdtm and all LM clients on Windows fail to connect to IS when IPv6 is installed but all IPv6 interfaces are disabled</br>
    </SYMPTOMS>
    <FILES>
      <FILE>
        <PATH>H:\EBF262323\EBF262323_Client_Installer_win32_x86\EBFs\clients\PmClient\client\bin\ACE.dll_bak</PATH>
        <CHECKSUM>303966974</CHECKSUM>
        <AFFECTEDFILES>
          <CHECKSUM>3461283269</CHECKSUM>
          <PATH>C:\clients\PmClient\CommandLineUtilities\PC\server\bin\ACE.dll</PATH>
          <PATH>C:\clients\PmClient\client\bin\ACE.dll</PATH>
        </AFFECTEDFILES>
      </FILE>
    </FILES>
    <NOTES>
    </NOTES>
  </EBFINFO>
</EBF>
Note: in the above XML, EBF\EBFINFO\FILES\FILE\AFFECTEDFILES\PATH and EBF\EBFINFO\FILES\FILE can each occur one or more times.
I am parsing it and generating another XML out of it:
def records = new XmlParser().parseText(rs)
csm.ebfHistory() {
    records.EBFINFO.each {
        ebfHistory_info(num: it.EBFNUM.text(),
                        release_date_time: it.RELEASEDATETIME.text()) {
            it.FILES.FILE.each { // throws Exception in thread "main" java.lang.NullPointerException: Cannot get property 'FILES' on null object
                ebfHistory_fileinfo(file_path: it.PATH.text(),
                                    file_checksum: it.CHECKSUM.text()) {
                    ebfHistory_fileinfo_affectedfiles(
                        afile_checksum: it.CHECKSUM.text(),
                        afile_path: it.PATH.text()
                    )
                }
            }
        }
    }
}
I want to generate something like the following:
<ebfHistory>
  <ebfHistory_info num="EBF262323" release_date_time="May 06, 2011">
    <ebfHistory_fileinfo file_checksum="303966974">
      <ebfHistory_fileinfo_affectedfiles afile_checksum="3461283269">
        <path>C:\clients\PmClient\CommandLineUtilities\PC\server\bin\ACE.dll</path>
        <path>C:\clients\PmClient\client\bin\ACE.dll</path>
      </ebfHistory_fileinfo_affectedfiles>
    </ebfHistory_fileinfo>
  </ebfHistory_info>
</ebfHistory>
but instead I get Exception in thread "main" java.lang.NullPointerException: Cannot get property 'FILES' on null object. Where am I going wrong? Please help, somebody. Thanks
Updated code (working)
def records = new XmlParser().parseText(rs)
csm.ebfHistory() {
    records.EBFINFO.each { ebfinfo ->
        ebfHistory_info(num: ebfinfo.EBFNUM.text(),
                        release_date_time: ebfinfo.RELEASEDATETIME.text()) {
            ebfinfo.SYMPTOMS.br.each {
                ebfHistory_symptom(name: it.text())
            }
        }
    }
    ebfHistory_dump(rs) {
        "${rs}"
    }
}
The it no longer refers to each EBFINFO, because you are in another closure (the ebfHistory_info closure).
Instead, explicitly name the EBFINFO object:
records.EBFINFO.each { ebfinfo ->                  // <-- Give it a name
    ebfHistory_info(num: ebfinfo.EBFNUM.text(),
                    release_date_time: ebfinfo.RELEASEDATETIME.text()) {
        ebfinfo.FILES.FILE.each {                  // <-- Use the name here
Same thing in the ebfHistory_fileinfo_affectedfiles parameters.