I'm trying to run a really simple Dataflow job: it takes some data from BigQuery, processes it a bit, and writes it to a new BigQuery table.
Pipeline p = Pipeline.create(
    PipelineOptionsFactory.fromArgs(args).withValidation().create());
p.apply(BigQueryIO.Read.fromQuery("SELECT * FROM realtime.status_6_output_11"));
p.run();
However, whenever I run it I get the following rather uninformative NullPointerException:
Exception in thread "main" java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
at java.util.regex.Matcher.reset(Matcher.java:309)
at java.util.regex.Matcher.<init>(Matcher.java:229)
at java.util.regex.Pattern.matcher(Pattern.java:1093)
at com.google.cloud.dataflow.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:174)
at com.google.cloud.dataflow.sdk.io.BigQueryIO$Read$Bound.apply(BigQueryIO.java:553)
at com.google.cloud.dataflow.sdk.io.BigQueryIO$Read$Bound.apply(BigQueryIO.java:387)
at com.google.cloud.dataflow.sdk.runners.PipelineRunner.apply(PipelineRunner.java:74)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.apply(DirectPipelineRunner.java:247)
at com.google.cloud.dataflow.sdk.Pipeline.applyInternal(Pipeline.java:367)
at com.google.cloud.dataflow.sdk.Pipeline.applyTransform(Pipeline.java:274)
at com.google.cloud.dataflow.sdk.values.PBegin.apply(PBegin.java:47)
at com.google.cloud.dataflow.sdk.Pipeline.apply(Pipeline.java:156)
at com.noraway.conductor.NormalizedPipeline.main(NormalizedPipeline.java:42)
I think there's a problem with my command-line arguments (I don't have any right now), but I'm not sure what it would be.
It looks like --tempLocation is missing, which BigQueryIO needs for staging temporary files. The obscure error message is fixed as part of https://github.com/GoogleCloudPlatform/DataflowJavaSDK/issues/313.
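A minimal sketch of supplying it, assuming the 1.x Dataflow SDK shown in the stack trace (as far as I recall, DataflowPipelineOptions exposes setTempLocation) and a placeholder GCS bucket gs://your-bucket:

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;

// Either pass it on the command line:
//   --tempLocation=gs://your-bucket/tmp
// or set it programmatically before creating the pipeline:
DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(DataflowPipelineOptions.class);
options.setTempLocation("gs://your-bucket/tmp"); // gs://your-bucket is a placeholder
Pipeline p = Pipeline.create(options);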
I am trying to update an object in Documentum (DCTM) through Java code; below is the code snippet:
IDfDocument communication = (IDfDocument) getDfSession().getObject(DfId.valueOf(communicationId));
communication.setString(ATTR_STATUS, status);
communication.save();
but I am getting the error below:
Caused by: DfException:: THREAD: be.ing.ca.xpression.DCTM001P-1; MSG: [DM_OBJ_MGR_E_VERSION_MISMATCH]error: "save of object
090283e589bf689d of type xx_document failed because of version
mismatch: old version was 4"; ERRORCODE: 100; NEXT: null
I think I am getting this error because another process is trying to modify the object at the same time; when more than one process tries to modify an object, DCTM throws this exception.
But after a lot of searching I did not find any solution for this error.
If anyone knows the solution, please reply.
Link that I referred to:
http://www.javablog.fr/?s=version+mismatch
Try calling a fetch() on the object before doing updates.
communication.fetch(null); // DFC's fetch takes a String type-name argument, which may be null
There are some optional parameters AFAIK, but it's been a while since I've been fiddling with DCTM.
Best of luck!
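For completeness, a minimal sketch of the fetch-then-update flow, assuming the standard DFC API (IDfPersistentObject.fetch(String) accepts null for the type name):

import com.documentum.fc.client.IDfDocument;
import com.documentum.fc.common.DfId;

IDfDocument communication =
        (IDfDocument) getDfSession().getObject(DfId.valueOf(communicationId));
communication.fetch(null);                    // refresh to the current version stamp
communication.setString(ATTR_STATUS, status);
communication.save();                         // now saves against the fetched version

If another process saves the object between the fetch and the save, the same DM_OBJ_MGR_E_VERSION_MISMATCH can still occur, so retrying the whole fetch-update-save block is a reasonable fallback.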
I am using Java with Spark. I need to create a Dataset of Tuple2 objects by combining two separate Datasets. I am using joinWith because I want the individual objects to remain intact (so I cannot use join). However, this fails with:
Exception in thread "main" java.lang.UnsupportedOperationException: Cannot evaluate expression: NamePlaceholder
I tried it with and without the alias, but I still get the same error. What am I doing wrong?
Dataset<MyObject1> dsOfMyObject1;
Dataset<MyObject2> dsOfMyObject2;

Dataset<Tuple2<MyObject1, MyObject2>> tuple2Dataset =
    dsOfMyObject1.as("A")
        .joinWith(dsOfMyObject2.as("B"),
            col("A.keyfield").equalTo(col("B.keyfield")));
Exception in thread "main" java.lang.UnsupportedOperationException: Cannot evaluate
expression: NamePlaceholder
at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.eval(Expression.scala:255)
at org.apache.spark.sql.catalyst.expressions.NamePlaceholder$.eval(complexTypeCreator.scala:243)
at org.apache.spark.sql.catalyst.expressions.CreateNamedStructLike$$anonfun$names$1.apply(complexTypeCreator.scala:289)
at org.apache.spark.sql.catalyst.expressions.CreateNamedStructLike$$anonfun$names$1.apply(complexTypeCreator.scala:289)
at scala.collection.immutable.List.map(List.scala:274)
I am using com.cloudera.crunch version 0.3.0-3-cdh-5.2.1.
I have a small program that reads some Avro files and filters out invalid data based on some criteria. I am using pipeline.write(PCollection, AvroFileTarget) to write the invalid-data output. It works fine in a production run.
For unit testing this piece of code, I use a MemPipeline instance, but in that case it fails while writing the output with the error:
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:428)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:197)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:78)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
at java.io.DataOutputStream.writeBytes(DataOutputStream.java:276)
at com.cloudera.crunch.impl.mem.MemPipeline.write(MemPipeline.java:159)
Any idea what's wrong?
The HADOOP_HOME environment variable should be configured properly, along with hadoop.dll and winutils.exe (on Windows).
Also pass the following JVM argument when executing the MR job/application:
-Djava.library.path=HADOOP_HOME/lib/native
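For example, on a Windows development box (all paths and the class name below are placeholders; adjust them to where your Hadoop binaries actually live):

set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin
java -Djava.library.path=%HADOOP_HOME%\lib\native -cp <your-test-classpath> com.example.YourTestRunner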
In a Hadoop program I tried to compress the map output, so I wrote the following code:
conf.setBoolean("mapred.compress.map.output",true);
conf.setClass("mapred.map.output.compression.codec",GzipCodec.class,CompressionCodec.class);
When I run it, I get the exception below. Does anybody know the reason?
WARN mapred.LocalJobRunner: job_local1149103367_0001
java.io.IOException: not a gzip file
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:495)
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:256)
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:185)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:91)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:72)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.mapred.IFile$Reader.positionToNextRecord(IFile.java:400)
at org.apache.hadoop.mapred.IFile$Reader.nextRawKey(IFile.java:425)
at org.apache.hadoop.mapred.Merger$Segment.nextRawKey(Merger.java:323)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:613)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:558)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:70)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:445)
Today I tested it again, and I found that if I put the two lines before the Job object is created,
Job job = new Job(conf, "MyCounter");
the error happens; if I put them after that, no error occurs. Why does this happen?
Are you using MRv1 or MRv2? If you are using MRv2, then use the following job config:
config.setBoolean("mapreduce.output.fileoutputformat.compress", true);
config.setClass("mapreduce.output.fileoutputformat.compress.codec", GzipCodec.class, CompressionCodec.class);
Additionally you can set
config.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.NONE.toString());
BLOCK, NONE, and RECORD are the three compression types.
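Note that the keys above compress the final job output. If you want to compress the intermediate map output specifically (what the MRv1 key mapred.compress.map.output did), the MRv2 equivalents are, as far as I know:

config.setBoolean("mapreduce.map.output.compress", true);
config.setClass("mapreduce.map.output.compress.codec",
        GzipCodec.class, CompressionCodec.class);

Also, Job makes a copy of the Configuration when it is constructed, which would explain the observation above: settings applied after new Job(conf, ...) never reach the job, so the map output is simply not compressed and the decompression error cannot occur.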
I am trying to call the function module "CSAP_MAT_BOM_MAINTAIN" to create a BOM in SAP, but I get an error.
IFunctionTemplate ft = mRepository.getFunctionTemplate("CSAP_MAT_BOM_MAINTAIN");
System.out.println(" Functional Template Created ");
if (ft == null){return;}
JCO.Function function = ft.getFunction();
JCO.ParameterList importparams =function.getImportParameterList();
// Setting HeadData Structure Information
//importparams.setValue("C000000609", "CHANGE_NO");
importparams.setValue("CPF10104", "MATERIAL");
importparams.setValue("1", "BOM_USAGE");
importparams.setValue("0001", "PLANT");
importparams.setValue("01", "ALTERNATIVE");
importparams.setValue("11.11.2011", "VALID_FROM");
importparams.setValue("X", "FL_COMMIT_AND_WAIT");
importparams.setValue("X", "FL_BOM_CREATE");
importparams.setValue("X", "FL_NEW_ITEM");
importparams.setValue("X", "FL_COMPLETE");
importparams.setValue("X", "FL_DEFAULT_VALUES");
JCO.Structure headStructure = importparams.getStructure("I_STKO");
headStructure.setValue("01", "BOM_STATUS");
headStructure.setValue("1", "BASE_QUAN");
headStructure.setValue("KG", "BASE_UNIT");
headStructure.setValue("BOM01", "BOM_GROUP");
JCO.Table stpo = function.getTableParameterList().getTable("T_STPO");
stpo.appendRow();
stpo.setValue("BOM Position 2.1", "ITEM_TEXT1");
stpo.setValue("L", "ITEM_CATEG");
stpo.setValue("L", "ID_ITM_CTG");
stpo.setValue("0010", "ITEM_NO");
stpo.setValue("0010", "ID_ITEM_NO");
stpo.setValue("13", "COMP_QTY");
stpo.setValue("KG", "COMP_UNIT");
stpo.setValue("00000001", "ITEM_NODE");
stpo.setValue("00000001", "ITEM_COUNT");
stpo.setValue("000000000000000000", "DEP_LINK");
stpo.setValue("12345-R6000001", "COMPONENT");
//stpo.setValue("12345-R6000001", "ID_COMP");
JCO.Table stpu = function.getTableParameterList().getTable("T_STPU");
stpu.appendRow();
stpu.setValue("0", "POINTER");
stpu.setValue("00000000", "STLKN");
stpu.setValue("0010", "STPOZ");
stpu.setValue("0001", "UPOSZ");
stpu.setValue("46", "UPMNG");
stpu.setValue("T1", "EBORT");
I get the error:
Exception in thread "main" com.sap.mw.jco.JCO$AbapException: (126) ERROR: Terminate processing.
After searching the internet, I found that this error occurs when the input parameters are wrong, but I am unable to find my mistake.
Please note that I have limited knowledge of ABAP programming.
Can anyone help me?
"(126) ABAP EXCEPTION" means an exception has been thrown by a function module in the remote system.
I think you should carefully review all of these input parameters, since one of them is making the function module fail.
If you have access to the SAP system, you can run transaction ST22 to get a detailed error log. However, you may need to debug within SAP as per Raj's suggestion.
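To see the ABAP-side exception key from Java, here is a minimal sketch, assuming the classic JCo 2.x client API used in the question (mClient is a hypothetical connected JCO.Client):

try {
    mClient.execute(function); // mClient: hypothetical connected JCO.Client
} catch (JCO.AbapException e) {
    // getKey() returns the name of the ABAP exception raised by the function module;
    // compare it with the exceptions listed for CSAP_MAT_BOM_MAINTAIN in SE37
    System.out.println("ABAP exception key: " + e.getKey());
    System.out.println("Message: " + e.getMessage());
}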