How do I create a CSV file with dynamic rows? - Java

I just can't figure out how to create a CSV file with a dynamic number of rows.
I used commons-csv to write 3 rows to a file, but I need more than that.
I'm reading values out of a database line by line, and they look like this:
[ID:1,TSP:'2018-01-01 00:10:00', VALUE: 856830, VAL1:'36,704'],
[ID:4,TSP:'2018-01-01 00:12:00', VALUE: 736830, VAL1:'1,14'],
[ID:5,TSP:'2018-01-01 00:10:00', VALUE: 656830, VAL1:'12,504'],
[ID:5,TSP:'2018-01-01 00:50:00', VALUE: 936830, VAL1:'5,18'],
[ID:3,TSP:'2018-01-01 00:10:00', VALUE: 736860, VAL1:'3,4'],
[ID:4,TSP:'2018-01-01 00:50:00', VALUE: 726830, VAL1:'9,14']
I want to create a .csv file with formatting like this:
TSP(2018-01-01 00:10:00) | VALUE_ID1 | VAL_ID1 | VALUE_ID3 | VAL_ID3 | VALUE_ID5 | VAL_ID5
TSP(2018-01-01 00:12:00) | VALUE_ID4 | VAL_ID4
TSP(2018-01-01 00:50:00) | VALUE_ID4 | VAL_ID4 | VALUE_ID5 | VAL_ID5
I hope someone can help me out with this one, because it's killing me...
Thanks in advance!
EDIT:
I used groupBy on the TSP, so I now have the following pattern:
TSP(2018-01-01 00:10:00):[[ID:1,TSP(2018-01-01 00:10:00),VALUE_ID1,VAL_ID1], [ID:3,TSP(2018-01-01 00:10:00),VALUE_ID3,VAL_ID3], [ID:5,TSP(2018-01-01 00:10:00),VALUE_ID5,VAL_ID5]]
TSP(2018-01-01 00:12:00):[[ID:4,TSP(2018-01-01 00:10:00),VALUE_ID4,VAL_ID4]]
etc. That's where I'm stuck now.
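For illustration, the grouping step looks roughly like this (the Row record below is just a stand-in for whatever my database layer actually returns; it needs Java 16+ for records):
import java.util.*;
import java.util.stream.Collectors;

public class GroupByTspSketch {

    // Placeholder for one database row.
    record Row(int id, String tsp, long value, String val1) {}

    public static void main(String[] args) {
        List<Row> rows = List.of(
                new Row(1, "2018-01-01 00:10:00", 856830, "36,704"),
                new Row(4, "2018-01-01 00:12:00", 736830, "1,14"),
                new Row(5, "2018-01-01 00:10:00", 656830, "12,504"),
                new Row(5, "2018-01-01 00:50:00", 936830, "5,18"),
                new Row(3, "2018-01-01 00:10:00", 736860, "3,4"),
                new Row(4, "2018-01-01 00:50:00", 726830, "9,14"));

        // Group by timestamp, keeping the timestamps sorted.
        Map<String, List<Row>> byTsp = rows.stream()
                .collect(Collectors.groupingBy(Row::tsp, TreeMap::new, Collectors.toList()));

        byTsp.forEach((tsp, group) -> System.out.println(tsp + " -> " + group));
    }
}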

You can use the following Java APIs for CSV generation:
OpenCSV
Apache Commons CSV
Both APIs have read/write support for working with CSV.
Examples of the APIs:
OpenCSV
Apache Commons CSV
I would prefer to use OpenCSV.
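For example, a minimal sketch with OpenCSV that writes one row per timestamp with a variable number of columns (the output file name and the hard-coded rows are just placeholders; in practice they would come from your grouped database data):
import com.opencsv.CSVWriter;
import java.io.FileWriter;
import java.util.List;

public class OpenCsvSketch {
    public static void main(String[] args) throws Exception {
        // Each entry is one output row; rows may have different lengths.
        List<String[]> rows = List.of(
                new String[]{"TSP(2018-01-01 00:10:00)", "856830", "36,704", "736860", "3,4", "656830", "12,504"},
                new String[]{"TSP(2018-01-01 00:12:00)", "736830", "1,14"},
                new String[]{"TSP(2018-01-01 00:50:00)", "726830", "9,14", "936830", "5,18"});

        try (CSVWriter writer = new CSVWriter(new FileWriter("output.csv"))) {
            rows.forEach(writer::writeNext);   // writeNext accepts any number of columns per row
        }
    }
}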

I believe univocity-parsers can help you greatly here. Check this section of its tutorial. You can write your rows like this:
import com.univocity.parsers.csv.CsvWriter;
import com.univocity.parsers.csv.CsvWriterSettings;
import java.io.File;

File outputFile = new File("/path/to/your.csv");
CsvWriter writer = new CsvWriter(outputFile, new CsvWriterSettings());

String currentTime = null;
for (String[] row : getDataFromYourDatabase()) {
    if (!row[1].equals(currentTime)) { // the time signature changed
        if (currentTime != null) {
            writer.writeValuesToRow(); // generates a row from the accumulated values
        }
        currentTime = row[1];
        writer.addValue(row[1]); // writes the time to the first column of the next row
    }
    writer.addValue(row[0]); // write the ID on a column
    writer.addValue(row[2]); // write the value associated with the ID on the next column
}
writer.writeValuesToRow(); // flush the last row
writer.close();
Assuming the method getDataFromYourDatabase() returns all rows ordered by time (i.e., by the values that look like TSP(2018-01-01 00:10:00)), this should work well.
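For completeness, a minimal sketch of what getDataFromYourDatabase() could look like with plain JDBC; the connection URL, table name and column names are only assumptions, not part of the question:
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

List<String[]> getDataFromYourDatabase() throws SQLException {
    List<String[]> rows = new ArrayList<>();
    // Hypothetical connection URL, table and column names.
    try (Connection con = DriverManager.getConnection("jdbc:postgresql://localhost/mydb", "user", "password");
         PreparedStatement ps = con.prepareStatement(
                 "SELECT id, tsp, value FROM measurements ORDER BY tsp"); // ordering by time is required
         ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            rows.add(new String[]{rs.getString("id"), rs.getString("tsp"), rs.getString("value")});
        }
    }
    return rows;
}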
Hope it helps.
Disclosure: I'm the author of this library. It's open-source and free (Apache 2.0 license)

This is the best I can come up with using Fuzzy-Csv (https://github.com/kayr/fuzzy-csv/)
package fuzzycsv

import static fuzzycsv.FuzzyStaticApi.reduce

def data = [[ID: 1, TSP: '2018-01-01 00:10:00', VALUE: 856830, VAL1: '36,704'],
            [ID: 4, TSP: '2018-01-01 00:12:00', VALUE: 736830, VAL1: '1,14'],
            [ID: 5, TSP: '2018-01-01 00:10:00', VALUE: 656830, VAL1: '12,504'],
            [ID: 5, TSP: '2018-01-01 00:50:00', VALUE: 936830, VAL1: '5,18'],
            [ID: 3, TSP: '2018-01-01 00:10:00', VALUE: 736860, VAL1: '3,4'],
            [ID: 4, TSP: '2018-01-01 00:50:00', VALUE: 726830, VAL1: '9,14']]

def list =
        FuzzyCSVTable
                .fromMapList(data)
                .transform('ID') { "VAL_ID${it.ID}" }
                .transform('TSP') { "TSP_(${it.TSP})" }
                .summarize('TSP', reduce { it['ID'] }.az('IDS'))
                .with { tbl(csv.collect { it.flatten() }[1..-1]) }

println(list.toCsvString())

Output:
"TSP_(2018-01-01 00:10:00)","VAL_ID1","VAL_ID5","VAL_ID3"
"TSP_(2018-01-01 00:12:00)","VAL_ID4"
"TSP_(2018-01-01 00:50:00)","VAL_ID5","VAL_ID4"

Related

Convert SQL data to JSON array [Java Spark]

I have a dataframe that I want to convert into a JSON array. Please find the example below.
Dataframe:
+------------+----------------+----------+----------------+------------------+
|        Name|              id|request_id|create_timestamp|deadline_timestamp|
+------------+----------------+----------+----------------+------------------+
|    Freeform|59bbe3ad-f487-44| htvjiwmfe|   1589155200000|     1591272659556|
|         D23|59bbe3ad-f487-44| htvjiwmfe|   1589155200000|     1591272659556|
|      Stores|59bbe3ad-f487-44| htvjiwmfe|   1589155200000|     1591272659556|
|VacationClub|59bbe3ad-f487-44| htvjiwmfe|   1589155200000|     1591272659556|
+------------+----------------+----------+----------------+------------------+
I want the JSON to look like this:
[
  {
    "testname": "xyz",
    "systemResponse": [
      {
        "name": "FGH",
        "id": "59bbe3ad-f487-44",
        "request_id": 1590791280,
        "create_timestamp": 1590799280
      },
      {
        "name": "FGH",
        "id": "59bbe3ad-f487-44",
        "request_id": 1590791280,
        "create_timestamp": 1590799280
      }
    ]
  }
]
You can define 2 beans:
Create an array from the 1st DF as an array of inner beans.
Define a parent bean with testname and the array of request details.
Please also see the inline comments in the code.
object DataToJsonArray {

  def main(args: Array[String]): Unit = {

    val spark = Constant.getSparkSess

    import spark.implicits._

    //Load your dataframe
    val requestDetailArray = List(
      ("Freeform", "59bbe3ad-f487-44", "htvjiwmfe", "1589155200000", "1591272659556"),
      ("D23", "59bbe3ad-f487-44", "htvjiwmfe", "1589155200000", "1591272659556"),
      ("Stores", "59bbe3ad-f487-44", "htvjiwmfe", "1589155200000", "1591272659556"),
      ("VacationClub", "59bbe3ad-f487-44", "htvjiwmfe", "1589155200000", "1591272659556")
    ).toDF
      //Map your Dataframe to the RequestDetails bean
      .map(row => RequestDetails(row.getString(0), row.getString(1), row.getString(2), row.getString(3), row.getString(4)))
      //Collect it as an Array
      .collect()

    //Create another dataframe with List[BaseClass] and set the (testname, Array[RequestDetails])
    List(BaseClass("xyz", requestDetailArray)).toDF()
      .write
      //Output your Dataframe as JSON
      .json("/json/output/path")
  }
}

case class RequestDetails(Name: String, id: String, request_id: String, create_timestamp: String, deadline_timestamp: String)

case class BaseClass(testname: String = "xyz", systemResponse: Array[RequestDetails])
Check the code below.
import org.apache.spark.sql.functions._

df.withColumn("systemResponse",
    array(
      struct("id","request_id","create_timestamp","deadline_timestamp").as("data")
    )
  )
  .select("systemResponse")
  .toJSON
  .select(col("value").as("json_data"))
  .show(false)
+-----------------------------------------------------------------------------------------------------------------------------------------------+
|json_data |
+-----------------------------------------------------------------------------------------------------------------------------------------------+
|{"systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
|{"systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
|{"systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
|{"systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
+-----------------------------------------------------------------------------------------------------------------------------------------------+
Updated
scala> :paste
// Entering paste mode (ctrl-D to finish)

df.withColumn("systemResponse",
    array(
      struct("id","request_id","create_timestamp","deadline_timestamp").as("data")
    )
  )
  .withColumn("testname",lit("xyz"))
  .select("testname","systemResponse")
  .toJSON
  .select(col("value").as("json_data"))
  .show(false)

// Exiting paste mode, now interpreting.
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
|json_data |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
|{"testname":"xyz","systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
|{"testname":"xyz","systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
|{"testname":"xyz","systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
|{"testname":"xyz","systemResponse":[{"id":"59bbe3ad-f487-44","request_id":"htvjiwmfe","create_timestamp":"1589155200000","deadline_timestamp":"1591272659556"}]}|
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
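For readers on the Java API, the same transformation would look roughly like this; a sketch assuming df is the input Dataset&lt;Row&gt; with the columns shown above, not code from the original answer:
import static org.apache.spark.sql.functions.array;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.lit;
import static org.apache.spark.sql.functions.struct;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Wrap the detail columns into a single-element array of structs, add the constant test name,
// and let Spark render each row as a JSON document.
Dataset<String> json = df
        .withColumn("systemResponse",
                array(struct(col("id"), col("request_id"),
                        col("create_timestamp"), col("deadline_timestamp"))))
        .withColumn("testname", lit("xyz"))
        .select("testname", "systemResponse")
        .toJSON();

json.show(false);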

Java Apache Spark flatMaps & Data Wrangling

I have to pivot the data in a file and then store it in another file. I am having some difficulty pivoting the data.
I have multiple files that contain data which looks somewhat like what I show below. The columns have variable lengths. I am trying to merge the files first, but for some reason the output is not correct. I haven't even tried the pivot method yet, and I am not sure how to use it either.
How can this be achieved?
File 1:
0,26,27,30,120
201008,100,1000,10,400
201009,200,2000,20,500
201010,300,3000,30,600
File 2:
0,26,27,30,120,145
201008,100,1000,10,400,200
201009,200,2000,20,500,100
201010,300,3000,30,600,150
File 3:
0,26,27,120,145
201008,100,10,400,200
201009,200,20,500,100
201010,300,30,600,150
Output:
201008,26,100
201008,27,1000
201008,30,10
201008,120,400
201008,145,200
201009,26,200
201009,27,2000
201009,30,20
201009,120,500
201009,145,100
.....
I am not very familiar with Spark, but I am trying to use flatMap and flatMapValues. I am not sure how to use them yet and would appreciate some guidance.
import org.apache.commons.lang.StringUtils;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.sql.SparkSession;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class ExecutionTest {

    public static void main(String[] args) {

        Logger.getLogger("org.apache").setLevel(Level.WARN);
        Logger.getLogger("org.spark_project").setLevel(Level.WARN);
        Logger.getLogger("io.netty").setLevel(Level.WARN);

        log.info("Starting...");

        // Step 1: Create a SparkContext.
        boolean isRunLocally = Boolean.valueOf(args[0]);
        String filePath = args[1];

        SparkConf conf = new SparkConf().setAppName("Variable File").set("serializer",
                "org.apache.spark.serializer.KryoSerializer");

        if (isRunLocally) {
            log.info("System is running in local mode");
            conf.setMaster("local[*]").set("spark.executor.memory", "2g");
        }

        SparkSession session = SparkSession.builder().config(conf).getOrCreate();
        JavaSparkContext jsc = new JavaSparkContext(session.sparkContext());

        jsc.textFile(filePath, 2)
            .map(new Function<String, String[]>() {
                private static final long serialVersionUID = 1L;

                @Override
                public String[] call(String v1) throws Exception {
                    return StringUtils.split(v1, ",");
                }
            })
            .foreach(new VoidFunction<String[]>() {
                private static final long serialVersionUID = 1L;

                @Override
                public void call(String[] t) throws Exception {
                    for (String string : t) {
                        log.info(string);
                    }
                }
            });
    }
}
A solution in Scala, as I am not a Java person; you should be able to adapt it, and add sorting, caching, etc.
The data is as follows: 3 files, with a duplicate entry evident; get rid of that if you do not want it.
0, 5,10, 15, 20
202008, 5,10, 15, 20
202009,10,20,100,200
8 rows generated above.
0,888,999
202008, 5, 10
202009, 10, 20
4 rows generated above.
0, 5
202009,10
1 row, which is a duplicate.
// Bit lazy with columns names, but anyway.
import org.apache.spark.sql.functions.input_file_name
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._
val inputPath: String = "/FileStore/tables/g*.txt"
val rdd = spark.read.text(inputPath)
.select(input_file_name, $"value")
.as[(String, String)]
.rdd
val rdd2 = rdd.zipWithIndex
val rdd3 = rdd2.map(x => (x._1._1, x._2, x._1._2.split(",").toList.map(_.toInt)))
val rdd4 = rdd3.map { case (pfx, pfx2, list) => (pfx,pfx2,list.zipWithIndex) }
val df = rdd4.toDF()
df.show(false)
df.printSchema()
val df2 = df.withColumn("rankF", row_number().over(Window.partitionBy($"_1").orderBy($"_2".asc)))
df2.show(false)
df2.printSchema()
val df3 = df2.withColumn("elements", explode($"_3"))
df3.show(false)
df3.printSchema()
val df4 = df3.select($"_1", $"rankF", $"elements".getField("_1"), $"elements".getField("_2")).toDF("fn", "line_num", "val", "col_pos")
df4.show(false)
df4.printSchema()
df4.createOrReplaceTempView("df4temp")
val df51 = spark.sql("""SELECT hdr.fn, hdr.line_num, hdr.val AS pfx, hdr.col_pos
FROM df4temp hdr
WHERE hdr.line_num <> 1
AND hdr.col_pos = 0
""")
df51.show(100,false)
val df52 = spark.sql("""SELECT t1.fn, t1.val AS val1, t1.col_pos, t2.line_num, t2.val AS val2
FROM df4temp t1, df4temp t2
WHERE t1.col_pos <> 0
AND t1.col_pos = t2.col_pos
AND t1.line_num <> t2.line_num
AND t1.line_num = 1
AND t1.fn = t2.fn
""")
df52.show(100,false)
df51.createOrReplaceTempView("df51temp")
df52.createOrReplaceTempView("df52temp")
val df53 = spark.sql("""SELECT DISTINCT t1.pfx, t2.val1, t2.val2
FROM df51temp t1, df52temp t2
WHERE t1.fn = t2.fn
AND t1.line_num = t2.line_num
""")
df53.show(false)
returns:
+------+----+----+
|pfx |val1|val2|
+------+----+----+
|202008|888 |5 |
|202009|999 |20 |
|202009|20 |200 |
|202008|5 |5 |
|202008|10 |10 |
|202009|888 |10 |
|202008|15 |15 |
|202009|5 |10 |
|202009|10 |20 |
|202009|15 |100 |
|202008|20 |20 |
|202008|999 |10 |
+------+----+----+
What we see is data wrangling: massaging the data into temp views and JOINing them with SQL appropriately.
The key here is knowing how to massage the data to make things easy. Note that there is no groupBy, etc. Per file, with rows of varying length, JOINing is not attempted at the RDD level, as it is too inflexible. The rank shows the line number, so you know which line is the first one (the one starting with 0).
This is what we call data wrangling. It is also what we call hard work for a few points on SO. This is one of my best efforts, and also one of the last of such efforts.
The weakness of the solution is that it takes a lot of work to get the first record of a file; there are alternatives. See https://www.cyberciti.biz/faq/unix-linux-display-first-line-of-file/; preprocessing is what I would realistically consider.
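As a rough alternative for the Java flatMap route the question mentions, a sketch along these lines could produce the desired (month, column id, value) triples. It assumes each file's first line is the header and that whole files fit comfortably in memory (wholeTextFiles reads each file as one record); jsc and filePath are as in the question's ExecutionTest, and the output path is a placeholder:
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// jsc and filePath as in the question.
JavaRDD<String> pivoted = jsc.wholeTextFiles(filePath)      // pairs of (fileName, fileContent)
    .flatMap(fileAndContent -> {
        String[] lines = fileAndContent._2().split("\\R");  // split the file into lines
        String[] header = lines[0].split(",");              // first line: 0, col-id, col-id, ...
        List<String> out = new ArrayList<>();
        for (int i = 1; i < lines.length; i++) {
            String[] cols = lines[i].split(",");
            for (int c = 1; c < cols.length && c < header.length; c++) {
                // month, column id taken from the header, value
                out.add(cols[0].trim() + "," + header[c].trim() + "," + cols[c].trim());
            }
        }
        return out.iterator();
    });

pivoted.saveAsTextFile("/path/to/output");   // placeholder output location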

Java Spark: Spark Bug Workaround for Datasets Joining with Unknown Join Column Names

I am using Spark 2.3.1 with Java.
I have encountered what I think is this known bug of Spark.
Here is my code :
public Dataset<Row> compute(Dataset<Row> df1, Dataset<Row> df2, List<String> columns) {

    Seq<String> columns_seq = JavaConverters.asScalaIteratorConverter(columns.iterator()).asScala().toSeq();

    final Dataset<Row> join = df1.join(df2, columns_seq);

    join.show();

    join.withColumn("newColumn", abs(col("value1").minus(col("value2")))).show();

    return join;
}
I call my code like this :
Dataset<Row> myNewDF = compute(MyDataset1, MyDataset2, Arrays.asList("field1","field2","field3","field4"));
Note : MyDataset1 and MyDataset2 are two datasets that come from the same Dataset MyDataset0 with multiple different transformations.
On the join.show() line, I get the following error :
2018-08-03 18:48:43 - ERROR main Logging$class - - - failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 235, Column 21: Expression "project_isNull_2" is not an rvalue
org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 235, Column 21: Expression "project_isNull_2" is not an rvalue
at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11821)
at org.codehaus.janino.UnitCompiler.toRvalueOrCompileException(UnitCompiler.java:7170)
at org.codehaus.janino.UnitCompiler.getConstantValue2(UnitCompiler.java:5332)
at org.codehaus.janino.UnitCompiler.access$9400(UnitCompiler.java:212)
at org.codehaus.janino.UnitCompiler$13$1.visitAmbiguousName(UnitCompiler.java:5287)
at org.codehaus.janino.Java$AmbiguousName.accept(Java.java:4053)
...
2018-08-03 18:48:47 - WARN main Logging$class - - - Whole-stage codegen disabled for plan (id=7):
But it does not stop the execution and still displays the content of the dataset.
Then, on the line join.withColumn("newColumn", abs(col("value1").minus(col("value2")))).show();
I get the error :
Exception in thread "main" org.apache.spark.sql.AnalysisException: Resolved attribute(s) 'value2,'value1 missing from field6#16,field7#3,field8#108,field5#0,field9#4,field10#28,field11#323,value1#298,field12#131,day#52,field3#119,value2#22,field2#35,field1#43,field4#144 in operator 'Project [field1#43, field2#35, field3#119, field4#144, field5#0, field6#16, value2#22, field7#3, field9#4, field10#28, day#52, field8#108, field12#131, value1#298, field11#323, abs(('value1 - 'value2)) AS newColumn#2579]. Attribute(s) with the same name appear in the operation: value2,value1. Please check if the right attribute(s) are used.;;
'Project [field1#43, field2#35, field3#119, field4#144, field5#0, field6#16, value2#22, field7#3, field9#4, field10#28, day#52, field8#108, field12#131, value1#298, field11#323, abs(('value1 - 'value2)) AS newColumn#2579]
+- AnalysisBarrier
...
This error ends the program.
The workaround proposed by Mijung Kim on the JIRA issue is to create a Dataset clone via toDF(columns). But in my case, where the column names used for the join are not known in advance (I only have a List), I can't use this workaround.
Is there another way to get around this very annoying bug ?
Try to call this method:
private static Dataset<Row> cloneDataset(Dataset<Row> ds) {
    List<Column> filterColumns = new ArrayList<>();
    List<String> filterColumnsNames = new ArrayList<>();
    scala.collection.Iterator<StructField> it = ds.exprEnc().schema().toIterator();
    while (it.hasNext()) {
        String columnName = it.next().name();
        filterColumns.add(ds.col(columnName));
        filterColumnsNames.add(columnName);
    }
    ds = ds.select(JavaConversions.asScalaBuffer(filterColumns).seq())
           .toDF(scala.collection.JavaConverters.asScalaIteratorConverter(filterColumnsNames.iterator()).asScala().toSeq());
    return ds;
}
on both datasets, just before the join, like this:
df1 = cloneDataset(df1);
df2 = cloneDataset(df2);
final Dataset<Row> join = df1.join(df2, columns_seq);
// or ( based on Nakeuh comment )
final Dataset<Row> join = cloneDataset(df1.join(df2, columns_seq));

How can I remove the output layer of a pre-trained model with the TensorFlow Java API?

I have a pre-trained model like Inception-v3. I want to remove the output layer and use the model for image recognition, starting from the example given by TensorFlow.
Just like the Python framework Keras, which has a method like model.layers.pop(), I tried to do this with the TensorFlow Java API. First I tried to use DL4J, but when I imported the Keras model, I got an error like this:
2017-06-15 21:15:43 INFO KerasInceptionV3Net:52 - Importing Inception model from data/inception-model.json
2017-06-15 21:15:43 INFO KerasInceptionV3Net:53 - Importing Weights model from data/inception_v3_complete
Exception in thread "main" java.lang.RuntimeException: Unknown exception.
at org.bytedeco.javacpp.hdf5$H5File.allocate(Native Method)
at org.bytedeco.javacpp.hdf5$H5File.<init>(hdf5.java:12713)
at org.deeplearning4j.nn.modelimport.keras.Hdf5Archive.<init>(Hdf5Archive.java:61)
at org.deeplearning4j.nn.modelimport.keras.KerasModel$ModelBuilder.weightsHdf5Filename(KerasModel.java:603)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasModelAndWeights(KerasModelImport.java:176)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.<init>(KerasInceptionV3Net.java:55)
at edu.usc.irds.dl.dl4j.examples.KerasInceptionV3Net.main(KerasInceptionV3Net.java:108)
HDF5-DIAG: Error detected in HDF5 (1.10.0-patch1) thread 0:
#000: C:\autotest\HDF5110ReleaseRWDITAR\src\H5F.c line 579 in H5Fopen(): unable to open file
major: File accessibilty
minor: Unable to open file
#001: C:\autotest\HDF5110ReleaseRWDITAR\src\H5Fint.c line 1100 in H5F_open(): unable to open file: time = Thu Jun 15 21:15:44 2017,name = 'data/inception_v3_complete', tent_flags = 0
major: File accessibilty
minor: Unable to open file
#002: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FD.c line 812 in H5FD_open(): open failed
major: Virtual File Layer
minor: Unable to initialize object
#003: C:\autotest\HDF5110ReleaseRWDITAR\src\H5FDsec2.c line 348 in H5FD_sec2_open(): unable to open file: name = 'data/inception_v3_complete', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0
major: File accessibilty
minor: Unable to open file
So I went back to TensorFlow. My plan is to modify the model in Keras and then convert it to a TensorFlow graph. Here is my conversion script:
input_fld = './'
output_node_names_of_input_network = ["pred0"]
write_graph_def_ascii_flag = True
output_node_names_of_final_network = 'output_node'
output_graph_name = 'test2.pb'

from keras.models import load_model
import tensorflow as tf
import os
import os.path as osp
from keras.applications.inception_v3 import InceptionV3
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD

output_fld = input_fld + 'tensorflow_model/'
if not os.path.isdir(output_fld):
    os.mkdir(output_fld)

net_model = InceptionV3(weights='imagenet', include_top=True)

num_output = len(output_node_names_of_input_network)
pred = [None]*num_output
pred_node_names = [None]*num_output
for i in range(num_output):
    pred_node_names[i] = output_node_names_of_final_network + str(i)
    pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])
print('output nodes names are: ', pred_node_names)

from keras import backend as K
sess = K.get_session()

if write_graph_def_ascii_flag:
    f = 'only_the_graph_def.pb.ascii'
    tf.train.write_graph(sess.graph.as_graph_def(), output_fld, f, as_text=True)
    print('saved the graph definition in ascii format at: ', osp.join(output_fld, f))

from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))
I got the model as a .pb file, but when I put it into the TensorFlow LabelImage example, I got this error:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:285)
at org.tensorflow.Session$Runner.run(Session.java:235)
at com.dlut.cmh.sheng.LabelImage.executeInceptionGraph(LabelImage.java:98)
at com.dlut.cmh.sheng.LabelImage.main(LabelImage.java:51)
I don't know how to solve this. Can anyone help me? Or do you have another way to do this?
The error message you get from the TensorFlow Java API:
Exception in thread "main" java.lang.IllegalArgumentException: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
suggests that the model is constructed in a way that requires you to feed a boolean value for the tensor named batch_normalization_1/keras_learning_phase.
So, you'd have to include that in your call to run by changing:
try (Session s = new Session(g);
Tensor result = s.runner().feed("input",image).fetch("output").run().get(0)) {
to something like:
try (Session s = new Session(g);
Tensor learning_phase = Tensor.create(false);
Tensor result = s.runner().feed("input", image).feed("batch_normalization_1/keras_learning_phase", learning_phase).fetch("output").run().get(0)) {
The names of nodes you feed and fetch depend on the model, so it's possible that the names of the 'input' and 'output' nodes are different as well.
You might also want to consider using the TensorFlow SavedModel format (see also https://github.com/tensorflow/serving/issues/310#issuecomment-297015251)
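For reference, loading a SavedModel from Java looks roughly like this; the export path, the "serve" tag, the input shape and the node names are all assumptions that depend on how the model was exported:
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

public class LoadSavedModel {
    public static void main(String[] args) {
        // Path and tag are placeholders; the model must have been exported in SavedModel format.
        try (SavedModelBundle bundle = SavedModelBundle.load("/path/to/saved_model", "serve")) {
            Session session = bundle.session();
            // Input shape and node names depend entirely on the exported graph.
            try (Tensor input = Tensor.create(new float[1][299][299][3]);
                 Tensor result = session.runner()
                         .feed("input", input)
                         .fetch("output")
                         .run()
                         .get(0)) {
                System.out.println("result shape: " + java.util.Arrays.toString(result.shape()));
            }
        }
    }
}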
Hope that helps

Groovy's XmlParser ignores CDATA CR/LF

I want to parse a log4j-generated XML log. Within the XML there is a node with a throwable (if any). This (multiline, tabbed) text is encapsulated in a CDATA section.
This is an excerpt of the whole file:
<log4j:event logger="org.codehaus.groovy.grails.web.errors.GrailsExceptionResolver" timestamp="1330083921521" level="ERROR" thread="http-8080-1">
<log4j:message><![CDATA[Exception occurred when processing request: [GET] /test/log/show
Stacktrace follows:]]></log4j:message>
<log4j:throwable><![CDATA[org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1231)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
at test.LogController$_closure2.doCall(LogController.groovy:21)
at test.LogController$_closure2.doCall(LogController.groovy)
at java.lang.Thread.run(Thread.java:662)
]]></log4j:throwable>
</log4j:event>
I parse it with Groovy's XmlParser:
def parser = new XmlParser(false, false).parse(new File("stack.log"))
return parser.'log4j:event'.collect { l ->
    LogEntry entry = new LogEntry()
    entry.with {
        level = l.'@level'
        message = l.'log4j:message'.text()
        thread = l.'@thread'
        logger = l.'@logger'
        timestamp = new Date(l.'@timestamp' as long)
        throwable = l.'log4j:throwable'?.text() ?: ''
    }
    entry
}
The 'throwable' field contains all the text but without CR/LF.
Does anybody know how to cope with that?
Thanks in advance...
I hate to just throw code at you, but this seems to work as expected and returns the CR/LFs:
def xml = '''<log>
| <log4j:event logger="org.codehaus.groovy.grails.web.errors.GrailsExceptionResolver" timestamp="1330083921521" level="ERROR" thread="http-8080-1">
| <log4j:message><![CDATA[Exception occurred when processing request: [GET] /test/log/show
|Stacktrace follows:]]></log4j:message>
| <log4j:throwable><![CDATA[org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
| at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1231)
| at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
| at test.LogController$_closure2.doCall(LogController.groovy:21)
| at test.LogController$_closure2.doCall(LogController.groovy)
| at java.lang.Thread.run(Thread.java:662)
|]]></log4j:throwable>
| </log4j:event>
|</log>'''.stripMargin()
class LogEntry {
    def level
    def message
    def thread
    def logger
    def timestamp
    def throwable

    String toString() {
        """EVENT:
          | level : $level
          | message : $message
          | thread : $thread
          | logger : $logger
          | ts : $timestamp
          | thrown : $throwable""".stripMargin()
    }
}

def parser = new XmlParser(false, false).parseText( xml )

def entries = parser.'log4j:event'.collect { event ->
    new LogEntry().with {
        level = event.'@level'
        message = event.'log4j:message'.text()
        thread = event.'@thread'
        logger = event.'@logger'
        timestamp = new Date( event.'@timestamp' as long )
        throwable = event.'log4j:throwable'?.text() ?: ''
        it
    }
}

entries.each {
    println it
}
That prints:
EVENT:
level : ERROR
message : Exception occurred when processing request: [GET] /test/log/show
Stacktrace follows:
thread : http-8080-1
logger : org.codehaus.groovy.grails.web.errors.GrailsExceptionResolver
ts : Fri Feb 24 11:45:21 GMT 2012
thrown : org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1231)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
at test.LogController$_closure2.doCall(LogController.groovy:21)
at test.LogController$_closure2.doCall(LogController.groovy)
at java.lang.Thread.run(Thread.java:662)
Which has CRLF chars in it where they are supposed to be...
This is with Groovy 1.8.6 btw... What version are you using? Can you upgrade and try again?
The XML standard calls for whitespace to be normalized during the parse.
I'm not sure, but the parser may have a setting to override this behavior. Otherwise, you could pre-process the file, replacing the line endings inside CDATA sections with their XML entity equivalents, and then parse it.
