So I have an Object coming in that can be any of about 100 different concrete types, each containing different elements: other objects, lists, sequences, primitives, etc. I want to strip out the values depth-first and build a string of simple values with a delimiter between them. I have mapped the fields and stored them elsewhere using recursion/reflection that only runs the first time a new object type comes in (a sketch of that mapping step follows the examples below).
An example of how I'm storing the data in the database for a few simple example objects:
Object A layout table: Timestamp = 12345 Fields = Length|Width|Depth
Object B layout table: Timestamp = 12345 Fields = Height|Weight|Name
Object A layout table: Timestamp = 12350 Fields = Length|Width|Depth|Label
Object A sample: Timestamp = 12348 Values = 5|7|2
Object A sample: Timestamp = 12349 Values = 4|3|1
Object B sample: Timestamp = 12346 Values = 75|185|Steve Irwin
Object A sample: Timestamp = 12352 Values = 7|2|8|HelloWorld
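For reference, a minimal sketch of the one-time field-mapping step described above (this is my illustration, not the asker's actual code; it treats primitives, wrappers, and Strings as leaves and ignores collections and arrays, which a real mapper would also need to walk):
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public final class FieldMapper {
    // Produces a layout string such as "Length|Width|Depth" for a given class.
    public static String mapFields(Class<?> type) {
        List<String> names = new ArrayList<String>();
        collect(type, names);
        return String.join("|", names);
    }

    // Depth-first walk over declared fields, recursing into nested object types.
    private static void collect(Class<?> type, List<String> names) {
        for (Field f : type.getDeclaredFields()) {
            Class<?> ft = f.getType();
            if (ft.isPrimitive() || ft == String.class
                    || Number.class.isAssignableFrom(ft) || ft == Boolean.class) {
                names.add(f.getName());
            } else {
                collect(ft, names);
            }
        }
    }
}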
Below is my current solution. I'm seeking improvements or alternatives to the design to accomplish the goal stated above.
Currently I take the incoming object and translate it to JSON using gson.toJson(). From that, I cycle through the JSON to pull out values using the code below. The issue is that this code is very CPU-intensive on the low-end CPU I am developing for, because many samples come in per second. The overall purpose of the application is a data recorder that records real-time samples into a SQLite database. I have also tried storing the unmodified JSON in a SQLite BLOB column, but this is terribly inefficient with regard to DB size. Is there a better/more efficient method for getting values out of an object?
I don't have an issue with storing the field mapping, since it only needs to be done once, but the value stripping needs to be done for every sample. I know it can be done via reflection as well, but that is also processing-heavy. Does anyone have a better method?
public static List<String> stripValuesFromJson(JsonElement json)
{
    // Local list that accumulates the stripped values; this is the return object.
    List<String> dataList = new ArrayList<String>();
    // Iterate through the JsonObject's entries and start parsing out values.
    for (Entry<String, JsonElement> entry : ((JsonObject) json).entrySet())
    {
        // Call the recursive processor that parses out items based on their
        // individual type: primitive, array, object, etc.
        dataList.addAll(dataParser(entry.getValue()));
    }
    return dataList;
}
/**
 * The actual data processor that parses out individual values and deals with
 * every possible type of data that can come in.
 *
 * @param json the JSON element being recursed through
 * @return the list of values
 */
public static List<String> dataParser(JsonElement json)
{
    List<String> dataList = new ArrayList<String>();
    if (json instanceof JsonPrimitive)
    {
        // Deal with primitives; booleans are normalized to "0"/"1".
        if (json.getAsString().equals("false"))
        {
            dataList.add("0");
        } else if (json.getAsString().equals("true"))
        {
            dataList.add("1");
        } else
        {
            dataList.add(json.getAsString());
        }
    } else if (json instanceof JsonObject)
    {
        // Recurse to get the primitives or nested objects out of this object.
        dataList.addAll(stripValuesFromJson(json));
    } else if (json instanceof JsonArray)
    {
        // Recurse for each element of this array/sequence.
        for (JsonElement a : (JsonArray) json)
        {
            dataList.addAll(dataParser(a));
        }
    } else if (json instanceof JsonNull)
    {
        dataList.add(null);
    } else
    {
        errorLog.error("Unknown JSON type: " + json.getClass());
    }
    return dataList;
}
One thing you could try is writing your own JSON parser that simply emits values. I have more experience with JavaCC, so I'd take one of the existing JSON grammars and modify it so that it only outputs values. This should not be too complicated.
Take, for example, the booleanValue production from the mentioned grammar:
Boolean booleanValue(): {
    Boolean b;
}{
    (
        (
            <TRUE>
            { b = Boolean.TRUE; }
        ) | (
            <FALSE>
            { b = Boolean.FALSE; }
        )
    )
    { return b; }
}
Basically you will need to replace returning the boolean value with appending "1" or "0" to the target list.
ANTLR is another option.
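Alternatively, if you would rather not maintain a grammar at all, the same "values only" idea can be sketched with Gson's streaming JsonReader, which never builds a JsonElement tree. This is my own illustration of the approach rather than a drop-in replacement for the asker's code; it flattens values in document order and maps booleans to "1"/"0" as the question requires:
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonToken;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public static List<String> streamValues(String json) throws IOException {
    List<String> values = new ArrayList<String>();
    try (JsonReader reader = new JsonReader(new StringReader(json))) {
        while (true) {
            JsonToken token = reader.peek();
            switch (token) {
                case BEGIN_OBJECT: reader.beginObject(); break;
                case END_OBJECT:   reader.endObject();   break;
                case BEGIN_ARRAY:  reader.beginArray();  break;
                case END_ARRAY:    reader.endArray();    break;
                case NAME:         reader.nextName();    break; // field names were mapped once already
                case BOOLEAN:      values.add(reader.nextBoolean() ? "1" : "0"); break;
                case NULL:         reader.nextNull(); values.add(null); break;
                case NUMBER:
                case STRING:       values.add(reader.nextString()); break; // nextString() also reads numbers
                case END_DOCUMENT: return values;
            }
        }
    }
}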
Here is my sample code. In this example there are only elementary types; no structure types need to be set. But in the output, no data exists in the table.
When I check the records in SAP, it contains multiple records for this particular id. Can someone explain this to me?
public void invokeRFC(JCoDestination destination) {
    JCoFunction function = null;
    try
    {
        JCoFunctionTemplate functionTemplate = destination.getRepository().getFunctionTemplate("RFC_METHOD");
        if (functionTemplate != null) {
            function = functionTemplate.getFunction();
        }
        if (function == null)
            throw new RuntimeException("Not found in SAP.");
        // to fill elementary types and structures
        configureImportParameters(function, "xxx", "abc");
        // to fill table type parameters
        configureTableParameters(function, "tblName", 1, "100");
        function.execute(destination);
    } catch (JCoException e)
    {
        e.printStackTrace();
    }
}

public void configureTableParameters(JCoFunction function, String table_name, int index, String id) {
    JCoTable table = function.getTableParameterList().getTable("table_name");
    table.appendRow();
    table.setRow(index);
    table.setValue("Partner", "100");
}

private void exportTable(JCoFunction jCoFunction, String tblName) {
    JCoTable resultTable = jCoFunction.getTableParameterList().getTable(tblName);
    int value = resultTable.getNumRows();
    System.out.println(value);
}

private void configureImportParameters(JCoFunction function, String param1, String param2) {
    JCoParameterList parameterList = function.getImportParameterList();
    parameterList.setValue("field1", param1);
    parameterList.setValue("field2", param2);
}
UPDATED the code.
Multiple problems can cause this:
You may be setting "" or " " into fields. (When you set values, it is better to set a field only if it actually has a value.)
If it says the partner does not exist and you are sure it exists, that means your data is not being passed properly. Add debug points where you set the data, and make sure you are passing the correct names and correct values.
Also, you do not need the setRow(index) call; table.appendRow() alone is enough. (But this will not impact your case.)
Also, when you call setValue, make sure whether it is an int field (normally it is not); in your given example it is an int.
e.g.:
private void configureTableParameters(JCoParameterList tableParameters, String key, String fieldKey, Object value) {
    JCoTable jCoTable = tableParameters.getTable(key);
    jCoTable.appendRow();
    if (value != null)
        jCoTable.setValue(fieldKey, String.valueOf(value));
}
This is just pseudocode and will not work as-is (the key, fieldKey, and value parameters were added here to make the sketch self-contained).
Test your ABAP remote function module with an SAP GUI via transaction code SE37 first.
If this test is successful and you get a different result when calling it from JCo with the same parameter values, then I recommend studying SAP note 206068 for possible reasons.
Also check your method configureTableParameters. I guess index is meant to be a field index and not a row count. Your implementation will create far too many unnecessary rows. I assume you wanted to call table.appendRow(); instead of table.appendRows(index);. Moreover, you probably intended to fill the first field in the row with the value "100", in which case you would have to pass the index value 0 instead of 1.
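For illustration, the corrected method could look like this. This is only a sketch under the assumptions stated above (that the field to fill is named "Partner" and that the id argument carries the value to store), not your exact RFC's signature:
public void configureTableParameters(JCoFunction function, String tableName, String id) {
    // Look the table up via the method parameter, not a hard-coded literal.
    JCoTable table = function.getTableParameterList().getTable(tableName);
    // appendRow() adds one row and positions the row pointer on it,
    // so no setRow(...) call is needed.
    table.appendRow();
    table.setValue("Partner", id);
}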
I am using the Google Gson API to parse a JSON file for my Android project, but I have a performance issue.
Here is the source code I use to parse the JSON with the Gson API:
public void loadJsonInDb(String path) throws IOException {
    InputStream isJson = context.getAssets().open(path);
    if (isJson != null) {
        int sizeJson = isJson.available();
        byte[] bufferJson = new byte[sizeJson];
        isJson.read(bufferJson);
        isJson.close();
        String jsonStr = new String(bufferJson, "UTF-8");
        JsonParser parser = new JsonParser();
        JsonObject object = parser.parse(jsonStr).getAsJsonObject();
        JsonArray array = object.getAsJsonArray("datas");
        Gson gson = new Gson();
        for (JsonElement jsonElement : array) {
            MyEntity entity = gson.fromJson(jsonElement, MyEntity.class);
            // Do insert-into-DB stuff
        }
    }
}
The problem is that after parsing I have to go through the JsonArray with a for loop and perform the desired action (an insertion into a SQLite DB with ORMLite for each element in the array). I would like to know if it is possible to perform the insertion on the fly during parsing, instead of waiting for the whole array to be built. I have seen in the documentation that JsonStreamParser might do the job, but I am not sure how to use it.
I have a few notes regarding the use of Gson and other stuff.
You should close I/O resources in finally blocks to ensure you don't leak resources (available and read may throw an exception that prevents the resource from being closed). (Also, I'm not sure that using available is a good idea here.)
You don't need Strings at all in this case. Strings are generally a performance/memory killer in such a scenario (much depends on their sizes), since the whole document gets accumulated into memory, so you lose your on-the-fly idea by collecting everything into memory first. In the worst case this can finish off your application with an OutOfMemoryError.
You can read input streams with a specified encoding, so no string buffering is necessary.
JsonParser is designed to return JSON trees: a JsonElement holds the whole JSON tree in memory. Sounds similar to the strings case above, right? Another performance penalty.
Creating Gson instances may be somewhat expensive (depending on what you compare it to, of course), and you can instantiate it once: it is thread-safe.
JsonStreamParser is not an option either, because each next() call produces another JSON tree branch in memory (again, it depends on how big your JSON documents and the elements of their $.datas arrays are).
Gson.fromJson uses a lookup to find the best type adapter; if you ask the Gson instance for the type adapter once, you stop wasting time on lookups. Type adapters are usually perfectly thread-safe too, and thus can be cached.
Summarizing the above, you could implement it as follows:
private static final Gson gson = new Gson();
private static final TypeAdapter<MyEntity> myEntityTypeAdapter = gson.getAdapter(MyEntity.class);

private static void loadJsonInDb(final String path)
        throws IOException {
    // Java 7 language features can be easily converted to Java 6 try/finally.
    // Note how you can decorate (wrap) everything: an input stream (byte stream)
    // into a reader (character stream, UTF-8 here) into a JSON reader (a more
    // high-level character reader).
    try ( final JsonReader jsonReader = new JsonReader(new InputStreamReader(context.getAssets().open(path), "UTF-8")) ) {
        // Ensure that we're about to open the root object
        jsonReader.beginObject();
        // And iterate over each object property
        while ( jsonReader.hasNext() ) {
            // And check its name
            final String name = jsonReader.nextName();
            // Another Java 7 language feature
            switch ( name ) {
            // Is it datas?
            case "datas":
                // Then consume its opening array token
                jsonReader.beginArray();
                // And iterate over each array element
                while ( jsonReader.hasNext() ) {
                    // Read the current value as a MyEntity instance
                    final MyEntity myEntity = myEntityTypeAdapter.read(jsonReader);
                    // Now do what you want here
                }
                // "Close" the array
                jsonReader.endArray();
                break;
            default:
                // If it's something other than "datas", just skip the entire value -- Gson will do it efficiently (I hope; not sure)
                jsonReader.skipValue();
                break;
            }
        }
        // "Close" the object
        jsonReader.endObject();
    }
}
Simply speaking, you just have to write a parser to consume each token. Now, having the following JSON document:
{
    "object": {
    },
    "number": 2,
    "array": [
    ],
    "datas": [
        {
            "k": "v1"
        },
        {
            "k": "v2"
        },
        {
            "k": "v3"
        }
    ]
}
the parser above would extract $.datas.* only, consuming as few resources as possible. Substituting // Now do what you want here with System.out.println(myEntity.k); would produce:
v1
v2
v3
assuming that MyEntity is final class MyEntity { final String k = null; }. Note that you can process infinitely large JSON documents with this approach too.
I have two suggestions here:
Deserialize the entire collection in three lines:
Gson gson = new Gson();
Type listType = new TypeToken<ArrayList<MyEntity>>(){}.getType();
List<MyEntity> listOf = gson.fromJson(jsonStr, listType);
When you have the whole list of entities, use bulkInsert within a single transaction; there you can get the idea of how to use it.
P.S.
To use bulkInsert you have to create a list of ContentValues from your entities.
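As an illustration, the conversion plus a single-transaction insert could look roughly like this. It is only a sketch: the table name "my_entity" and column "k" are my assumptions based on the sample document, not your actual schema (ORMLite's own Dao.callBatchTasks is another way to batch the work):
import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;
import java.util.List;

static void bulkInsert(SQLiteDatabase db, List<MyEntity> entities) {
    db.beginTransaction();
    try {
        for (MyEntity entity : entities) {
            ContentValues cv = new ContentValues();
            cv.put("k", entity.k); // hypothetical column matching the sample JSON
            db.insert("my_entity", null, cv); // hypothetical table name
        }
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
    }
}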
I am using JHDF5 to log a collection of values to an HDF5 file. I am currently using two ArrayLists to do this: one with the values and one with the names of the values.
ArrayList<String> valueList = new ArrayList<String>();
ArrayList<String> nameList = new ArrayList<String>();
valueList.add("Value1");
valueList.add("Value2");
nameList.add("Name1");
nameList.add("Name2");
IHDF5Writer writer = HDF5Factory.configure("My_Log").keepDataSetsIfTheyExist().writer();
HDF5CompoundType<List<?>> type = writer.compound().getInferredType("", nameList, valueList);
writer.compound().write("log1", type, valueList);
writer.close();
This logs the values in the correct way to the file My_Log and into the dataset "log1". However, this example always overwrites the previous log of the values in the dataset "log1". I want to be able to log to the same dataset every time, adding the latest log to the next line/index of the dataset. For example, if I were to change the value of "Name2" to "Value3" and log the values, and then change "Name1" to "Value4" and "Name2" to "Value5" and log the values, the dataset should look like this:
I thought the keepDataSetsIfTheyExist() option would prevent the dataset from being overwritten, but apparently it doesn't work that way.
Something similar to what I want can be achieved in some cases with writer.compound().writeArrayBlock(), specifying the index at which the array block shall be written. However, this solution doesn't seem compatible with my current code, where I have to use lists to handle my data.
Is there some option to achieve this that I have overlooked, or can't this be done with JHDF5?
I don't think that will work. It is not quite clear to me, but I believe the getInferredType() you are using creates a dataset with two name -> value entries, so it is effectively creating an object inside the HDF5 file. The best solution I could come up with was to read the previous values and add them to the valueList before writing:
ArrayList<String> valueList = new ArrayList<>();
valueList.add("Value1");
valueList.add("Value2");

try (IHDF5Reader reader = HDF5Factory.configure("My_Log.h5").reader()) {
    String[] previous = reader.string().readArray("log1");
    for (int i = 0; i < previous.length; i++) {
        valueList.add(i, previous[i]);
    }
} catch (HDF5FileNotFoundException ex) {
    // Nothing to do here.
}

MDArray<String> values = new MDArray<>(String.class, new long[]{valueList.size()});
for (int i = 0; i < valueList.size(); i++) {
    values.set(valueList.get(i), i);
}

try (IHDF5Writer writer = HDF5Factory.configure("My_Log.h5").writer()) {
    writer.string().writeMDArray("log1", values);
}
If you call this code a second time with "Value3" and "Value4" instead, you will get four values. This sort of solution might become unpleasant if you start to have hierarchies of datasets, however.
To solve your issue, you need to define the dataset log1 as extendible so that it can store an unknown number of log entries (generated over time) and write these using a point or hyperslab selection (otherwise the dataset will be overwritten).
If you are not bound to a specific technology for handling HDF5 files, you may wish to take a look at HDFql, which is a high-level language to manage HDF5 files easily. A possible solution for your use-case using HDFql (in Java) is:
public class Example
{
    public static class Log
    {
        String name1;
        String name2;
    }

    public static boolean doSomething(Log log)
    {
        log.name1 = "Value1";
        log.name2 = "Value2";
        return true;
    }

    public static void main(String args[])
    {
        // declare variables
        Log log = new Log();
        int variableNumber;

        // create an HDF5 file named 'My_Log.h5' and use (i.e. open) it
        HDFql.execute("CREATE AND USE FILE My_Log.h5");

        // create an extendible HDF5 dataset named 'log1' of data type compound
        HDFql.execute("CREATE DATASET log1 AS COMPOUND(name1 AS VARCHAR, name2 AS VARCHAR)(0 TO UNLIMITED)");

        // register variable 'log' for subsequent usage (by HDFql)
        variableNumber = HDFql.variableRegister(log);

        // call function 'doSomething' that does something and populates variable 'log' with an entry
        while(doSomething(log))
        {
            // alter (i.e. extend) dataset 'log1' to +1 (i.e. add a new row)
            HDFql.execute("ALTER DIMENSION log1 TO +1");

            // insert (i.e. write) data stored in variable 'log' into dataset 'log1' using a point selection
            HDFql.execute("INSERT INTO log1(-1) VALUES FROM MEMORY " + variableNumber);
        }
    }
}
I have a HashSet that I created, and this is what it contains. It will contain more later on; this was pasted from standard out when I called toString on it, just to show the contents.
foo.toString(): Abstractfoo [id=2, serial=1d21d, value=1.25, date=2012-09-02 12:00:00.0]
INFO [STDOUT] price.toString(): Abstractfoo [id=1, serial=1d24d, value=1.30, date=2012-09-19 12:00:00.0]
I also have a List that I need to compare against the set. One of the elements in the List is:
Bar.toString(): Bar [id=1d21d, name=Dell, description=Laptop, ownerId=null]
Here is what I am trying to do...
Bar contains all of the elements I want foo to have. There will only be one unique serial. I would like my program to check whether an element of the List matches an element in the HashSet by bar's id, i.e. serial == id.
Here is what I've been trying to do
Removed code and added clearer code below
I've verified the data is getting entered into the HashSet and List correctly by viewing it through the debugger.
foo is being pulled from a database through Hibernate, and bar is coming from a different source. If there is a matching element in bar, I need to add it to a list and pass it back to my UI, where I'll enter some additional data and then commit it to the database.
Let me know if this makes sense and if I can provide anymore information.
Thanks
EDIT: Here is the class
@RequestMapping(value = "/system", method = RequestMethod.GET)
public @ResponseBody
List<AbstractSystem> SystemList() {
    // Retrieve system list from database
    HashSet<AbstractSystem> systemData = new HashSet<AbstractSystem>(
            systemService.getSystemData());

    // Retrieve system info from cloud API
    List<SystemName> systemName = null;
    try {
        systemName = cloudClass.getImages();
    } catch (Exception e) {
        LOG.warn("Unable to get status", e);
    }

    // Tried this, but iter2 only has two items and iter has many more.
    // In production it will be the other way around, but I need to not
    // have to worry about that.
    Iterator<SystemName> iter = systemName.iterator();
    Iterator<AbstractSystem> iter2 = systemData.iterator();
    while (iter.hasNext()) {
        SystemName temp = iter.next();
        while (iter2.hasNext()) {
            AbstractSystem temp2 = iter2.next();
            System.out.println("temp2.getSerial(): " + temp2.getSerial());
            System.out.println("temp.getId(): " + temp.getId());
            if (temp2.getSerial().equals(temp.getId())) {
                System.out.println("This will be slow...");
            }
        }
    }
    return new ArrayList<AbstractSystem>(systemData);
}
If N is the number of items in systemName and M is the number of items in systemData, then you've effectively built an O(N*M) method.
If you instead represent your systemData as a HashMap keyed by the AbstractSystem.getSerial() values, then you just loop through the systemName collection and do a lookup by systemName.getId(). This becomes more like O(N+M).
(You might want to avoid variables like iter, iter2, temp2, etc., since those make the code harder to read.)
EDIT - here's what I mean:
// Retrieve system list from database
HashMap<Integer, AbstractSystem> systemDataMap = new HashMap<Integer, AbstractSystem>(
        systemService.getSystemDataMap());

// Retrieve system info from cloud API
List<SystemName> systemNames = cloudClass.getImages();
for (SystemName systemName : systemNames) {
    if (systemDataMap.containsKey(systemName.getId())) {
        System.out.println("This will be slow...");
    }
}
I used Integer because I can't tell from your code what the types of AbstractSystem.getSerial() and SystemName.getId() are. This assumes that you store the system data as a Map elsewhere. If not, you could construct the map yourself here.
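For completeness, building that map yourself is just a short loop. A sketch, reusing the getters from the question and still assuming an Integer serial:
HashMap<Integer, AbstractSystem> systemDataMap = new HashMap<Integer, AbstractSystem>();
for (AbstractSystem system : systemService.getSystemData()) {
    systemDataMap.put(system.getSerial(), system);
}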
I am relatively new to Struts and Java. I have been trying to understand the following piece of code.
List<LabelValueBean> dbList = getCardProductList();
ConcurrentMap<Integer, ProductItem> ret = new ConcurrentHashMap<Integer, ProductItem>();
for (LabelValueBean lb : dbList) {
    ProductItem pi = new ProductItem();
    pi.setId(Integer.valueOf(lb.getId()));
    pi.setCode(lb.getCode());
    pi.setName(lb.getDescription());
    LabelValueBeanAuxCol[] aux = lb.getLabelvaluebeanauxcol();
    pi.setTypeProduct(Boolean.TRUE);
    if (null != aux) {
        for (LabelValueBeanAuxCol element : aux) {
            if (null != element
                    && "PRDCT_SVC_IND".equals(element.getName())) {
                pi.setTypeProduct(Boolean.valueOf("Y".equals(element
                        .getValue())));
            }
        }
    }
    pi.setNeedSetup(Boolean.TRUE);
    ret.put(pi.getId(), pi);
}
return Himms2LookupUtil
        .<ConcurrentMap<Integer, ProductItem>> setValueInCache(
                Himms2Names.CARD_SERVICE_PRODUCT_LIST, ret);
}
With respect to the code block around "PRDCT_SVC_IND", how does a column name get mapped to the LabelValueBean?
Though I have an idea about the concurrent map and its key-value functionality, I am pretty unsure about most of the concepts here and have tried searching the internet without much luck. I would like a clearer overview of what the above lines actually mean (in general, of course), in terms of the concepts used here, such as ConcurrentHashMap, List<LabelValueBean>, etc.
Any inputs would be greatly appreciated.
The code is doing the following things:
1) Getting the card product list in the first line and storing the reference in dbList:
List<LabelValueBean> dbList = getCardProductList();
2) Creating a ConcurrentMap of key-value pairs.
3) Iterating over the list and performing the following operations for each LabelValueBean:
a) Creates a ProductItem object.
b) Sets the LabelValueBean's values (id, code, name) on the ProductItem object.
c) Sets ProductItem.typeProduct to TRUE as the default.
d) Gets Labelvaluebeanauxcol and stores it in a LabelValueBeanAuxCol[] array called aux.
e) If aux is not null, iterates over the aux array and checks: if (the element is not null AND the element name equals "PRDCT_SVC_IND"), THEN sets ProductItem.typeProduct to TRUE if element.value equals "Y", ELSE sets it to FALSE. (A condensed version of this step is shown below.)
f) Sets ProductItem.needSetup to TRUE.
g) Puts the ProductItem into the ConcurrentMap, keyed by ProductItem.id.
4) Storing the ConcurrentMap in the cache.
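Condensing step 3 e), the flag logic from the question is equivalent to the following (same getters and constants as in the original code):
// typeProduct defaults to TRUE; a PRDCT_SVC_IND aux column, when present,
// overrides it: value "Y" -> TRUE, anything else -> FALSE.
pi.setTypeProduct(Boolean.TRUE);
if (aux != null) {
    for (LabelValueBeanAuxCol element : aux) {
        if (element != null && "PRDCT_SVC_IND".equals(element.getName())) {
            pi.setTypeProduct(Boolean.valueOf("Y".equals(element.getValue())));
        }
    }
}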