Berkeley JE StoredMap: replace existing value without loading previous one - java

I want to update a StoredMap value, and I don't care about the old value. I cannot find a way to prevent the previous value from being loaded.
StoredMap<Integer, SyntaxDocument> tsCol = new StoredMap<Integer, SyntaxDocument>(tsdb, new IntegerBinding(), new PassageBinding(), true);
tsCol.put(1, doc); // insert value => ok
tsCol.put(1, doc); // <- loads the previous value, but I don't care about it. I want to avoid the "heavy" PassageBinding process.
tsCol.putAll(Collections.singletonMap(1, doc)); // Even this one loads the old value
Is there a way to optimize my code and update an existing value without loading it (or at least to prevent the binding from processing the old DatabaseEntry bytes)?
Note: calling remove followed by put is slower.

The solution is to use the low-level Database API:
Database tsdb = environment.openDatabase(null, "tsdb", dbconfig);
PassageBinding binding = new PassageBinding();
DatabaseEntry idDbEntry = new DatabaseEntry();
IntegerBinding.intToEntry(id, idDbEntry);   // serialize the key
DatabaseEntry dbEntry = new DatabaseEntry();
binding.objectToEntry(data, dbEntry);       // serialize the new value
tsdb.put(null, idDbEntry, dbEntry);         // <-- replaces any existing value without loading it
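For completeness, here is a minimal sketch of how that low-level handle could be wrapped in a small helper next to the existing StoredMap. The DatabaseConfig setup and the helper class name are assumptions, since the original post does not show how tsdb and dbconfig are created; PassageBinding and SyntaxDocument are the classes from the question.
import com.sleepycat.bind.tuple.IntegerBinding;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;

class TsStore {
    private final Database tsdb;
    private final PassageBinding binding = new PassageBinding();

    TsStore(Environment environment) {
        DatabaseConfig dbconfig = new DatabaseConfig();
        dbconfig.setAllowCreate(true);   // assumed configuration
        this.tsdb = environment.openDatabase(null, "tsdb", dbconfig);
    }

    // Blind write: serializes the new value and overwrites whatever is stored,
    // without ever running PassageBinding on the old bytes.
    void overwrite(int id, SyntaxDocument doc) {
        DatabaseEntry key = new DatabaseEntry();
        IntegerBinding.intToEntry(id, key);
        DatabaseEntry value = new DatabaseEntry();
        binding.objectToEntry(doc, value);
        tsdb.put(null, key, value);
    }
}
The StoredMap can keep using the same Database handle for reads; only the overwrite path bypasses the binding of the old record.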

Related

Retrieving only latest record in iteration after adding multiple values to list using java

I'm trying to add multiple records to a list and iterate over it, but it's displaying only the latest record added.
Here is my code
List<ExportBean> exportBeans = new ArrayList<ExportBean>();
ExportBean exportBean = new ExportBean();
exportBean.setBooleanValue(true);
exportBean.setKeyValue("PRE_APPROVED_OFFER");
exportBean.setStringValue("111");
exportBeans.add(exportBean);
exportBean.setBooleanValue(true);
exportBean.setKeyValue("PRE_APPROVED_OFFER1");
exportBean.setStringValue("222");
exportBeans.add(exportBean);
getLopRefNo(exportBeans);
When I iterate over it:
def getLopRefNo = { exportBeans ->
    println "in function ${exportBeans}"
}
It shows only
in function [ExportMessagingBean{stringValue='222', keyValue='PRE_APPROVED_OFFER1', exportBoolean=true}, ExportMessagingBean{stringValue='222', keyValue='PRE_APPROVED_OFFER1', exportBoolean=true}]
It doesn't show the first record added. Am I missing anything?
The problem has nothing to do with Groovy. In your code you are not actually adding two objects; you are adding one object and modifying it.
List<ExportBean> exportBeans = new ArrayList<ExportBean>();
ExportBean exportBean = new ExportBean();
exportBean.setBooleanValue(true);
exportBean.setKeyValue("PRE_APPROVED_OFFER");
exportBean.setStringValue("111");
exportBeans.add(exportBean); // add object to list
exportBean.setBooleanValue(true);
exportBean.setKeyValue("PRE_APPROVED_OFFER1");
exportBean.setStringValue("222");
exportBeans.add(exportBean); // the same reference is added again, so the list now holds two entries pointing to the same object
getLopRefNo(exportBeans);
Because the list now contains the same object twice, your second set of setter calls overwrites the values you set first, and both entries print the same data. What you should do is create another instance of ExportBean, like this:
List<ExportBean> exportBeans = new ArrayList<ExportBean>();
ExportBean exportBean = new ExportBean();
exportBean.setBooleanValue(true);
exportBean.setKeyValue("PRE_APPROVED_OFFER");
exportBean.setStringValue("111");
exportBeans.add(exportBean); // add object to list
exportBean = new ExportBean(); //create new instance of ExportBean
exportBean.setBooleanValue(true);
exportBean.setKeyValue("PRE_APPROVED_OFFER1");
exportBean.setStringValue("222");
exportBeans.add(exportBean); // this new instance will be correctly added
getLopRefNo(exportBeans);
You only have one ExportBean object in your code (you only call new ExportBean() once), so you have added the same object to the list twice. Your second set of calls to the set methods on the bean just updates your existing object rather than creating a new one.
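If ExportBean is under your control, a convenience constructor makes the mistake harder to repeat, because every add() call then receives its own freshly created object. The constructor below is hypothetical; the real ExportBean from the question may only have setters.
// Hypothetical convenience constructor; adapt to the real ExportBean.
public ExportBean(boolean booleanValue, String keyValue, String stringValue) {
    setBooleanValue(booleanValue);
    setKeyValue(keyValue);
    setStringValue(stringValue);
}

// Usage: each element of the list is a distinct instance.
List<ExportBean> exportBeans = new ArrayList<ExportBean>();
exportBeans.add(new ExportBean(true, "PRE_APPROVED_OFFER", "111"));
exportBeans.add(new ExportBean(true, "PRE_APPROVED_OFFER1", "222"));
getLopRefNo(exportBeans);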

JHDF5 - How to avoid dataset being overwritten

I am using JHDF5 to log a collection of values to a hdf5 file. I am currently using two ArrayLists to do this, one with the values and one with the names of the values.
ArrayList<String> valueList = new ArrayList<String>();
ArrayList<String> nameList = new ArrayList<String>();
valueList.add("Value1");
valueList.add("Value2");
nameList.add("Name1");
nameList.add("Name2");
IHDF5Writer writer = HDF5Factory.configure("My_Log").keepDataSetsIfTheyExist().writer();
HDF5CompoundType<List<?>> type = writer.compound().getInferredType("", nameList, valueList);
writer.compound().write("log1", type, valueList);
writer.close();
This logs the values correctly to the file My_Log in the dataset "log1". However, this example always overwrites the previous log of the values in the dataset "log1". I want to be able to log to the same dataset every time, adding the latest log at the next line/index of the dataset. For example, if I were to change the value of "Name2" to "Value3" and log the values, and then change "Name1" to "Value4" and "Name2" to "Value5" and log the values, the dataset should look like this:
I thought the keepDataSetsIfTheyExist() option would prevent the dataset from being overwritten, but apparently it doesn't work that way.
Something similar to what I want can be achieved in some cases with writer.compound().writeArrayBlock(), specifying at which index the array block should be written. However, this solution doesn't seem to be compatible with my current code, where I have to use lists for handling my data.
Is there some option to achieve this that I have overlooked, or can't this be done with JHDF5?
I don't think that will work. It is not quite clear to me, but I believe the getInferredType() you are using creates a data set with two name -> value entries, so it is effectively creating a single object inside the HDF5 file. The best solution I could come up with was to read the previous values and add them to the valueList before writing:
ArrayList<String> valueList = new ArrayList<>();
valueList.add("Value1");
valueList.add("Value2");

try (IHDF5Reader reader = HDF5Factory.configure("My_Log.h5").reader()) {
    String[] previous = reader.string().readArray("log1");
    for (int i = 0; i < previous.length; i++) {
        valueList.add(i, previous[i]);
    }
} catch (HDF5FileNotFoundException ex) {
    // Nothing to do here.
}

MDArray<String> values = new MDArray<>(String.class, new long[]{valueList.size()});
for (int i = 0; i < valueList.size(); i++) {
    values.set(valueList.get(i), i);
}

try (IHDF5Writer writer = HDF5Factory.configure("My_Log.h5").writer()) {
    writer.string().writeMDArray("log1", values);
}
If you call this code a second time with "Value3" and "Value4" instead, you will get four values. This sort of solution can become unpleasant if you start to have hierarchies of datasets, however.
To solve your issue, you need to define the dataset log1 as extendible so that it can store an unknown number of log entries (that are generated over time) and write these using a point or hyperslab selection (otherwise, the dataset will be overwritten).
If you are not bound to a specific technology to handle HDF5 files, you may wish to take a look at HDFql, which is a high-level language to manage HDF5 files easily. A possible solution for your use case using HDFql (in Java) is:
public class Example
{
    public static class Log
    {
        String name1;
        String name2;
    }

    public static boolean doSomething(Log log)
    {
        log.name1 = "Value1";
        log.name2 = "Value2";
        return true;
    }

    public static void main(String args[])
    {
        // declare variables
        Log log = new Log();
        int variableNumber;

        // create an HDF5 file named 'My_Log.h5' and use (i.e. open) it
        HDFql.execute("CREATE AND USE FILE My_Log.h5");

        // create an extendible HDF5 dataset named 'log1' of data type compound
        HDFql.execute("CREATE DATASET log1 AS COMPOUND(name1 AS VARCHAR, name2 AS VARCHAR)(0 TO UNLIMITED)");

        // register variable 'log' for subsequent usage (by HDFql)
        variableNumber = HDFql.variableRegister(log);

        // call function 'doSomething' that does something and populates variable 'log' with an entry
        while(doSomething(log))
        {
            // alter (i.e. extend) dataset 'log1' to +1 (i.e. add a new row)
            HDFql.execute("ALTER DIMENSION log1 TO +1");

            // insert (i.e. write) data stored in variable 'log' into dataset 'log1' using a point selection
            HDFql.execute("INSERT INTO log1(-1) VALUES FROM MEMORY " + variableNumber);
        }
    }
}

Inc/dec mongotemplate, atomically

I am trying to update one value of my document atomically using findAndModify, which, according to my reading, is atomic within a single document. According to my unit test, the values are not modified.
I'm using mongoTemplate in Java, and my code looks like this:
public OfferConfiguration IncreaseStock(OfferConfiguration offerConfiguration) {
    Query query = new Query(Criteria.where("_id").is(offerConfiguration.getId()));
    Update update = new Update().inc("stock", 1);
    return mongoTemplate.findAndModify(query, update, OfferConfiguration.class);
}

public OfferConfiguration findAndDecreaseStock(String offerId) {
    Query query = new Query(Criteria.where("_id").is(offerId).and("stock").gt(0));
    Update update = new Update().inc("stock", -1);
    return mongoTemplate.findAndModify(query, update, OfferConfiguration.class);
}
The stock field has type Long, and I can see that when I use the criteria in a find:
Query query = new Query(Criteria.where("_id").is(offerId).and("stock").gt(0));
return mongoTemplate.findOne(query, OfferConfiguration.class);
It returns only the values whose stock is greater than 0.
Any idea what is wrong in my code?
By default, findAndModify applies the update but returns the document as it was before the modification.
If you want to get back the modified document, you have to pass the optional 'new' option.
It seems you already found that the way to do that is by adding returnNew(true) to the findAndModify command.
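A minimal sketch of what that looks like with MongoTemplate, assuming Spring Data MongoDB's FindAndModifyOptions is available; everything beyond the method body from the question is illustrative:
import org.springframework.data.mongodb.core.FindAndModifyOptions;

public OfferConfiguration findAndDecreaseStock(String offerId) {
    Query query = new Query(Criteria.where("_id").is(offerId).and("stock").gt(0));
    Update update = new Update().inc("stock", -1);
    // returnNew(true) makes findAndModify return the document after the decrement
    return mongoTemplate.findAndModify(query, update,
            new FindAndModifyOptions().returnNew(true), OfferConfiguration.class);
}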

Trying to compare a HashSet element with an element in a List

I have a HashSet that I created, and this is what it contains. It will contain more later on; this is pasted from standard out when I did a toString() on it, just to show the contents.
foo.toString(): Abstractfoo [id=2, serial=1d21d, value=1.25, date=2012-09-02 12:00:00.0]
INFO [STDOUT] price.toString(): Abstractfoo [id=1, serial=1d24d, value=1.30, date=2012-09-19 12:00:00.0]
I also have a List, and I need to compare the two. One of the elements in the List is:
Bar.toString(): Bar [id=1d21d, name=Dell, description=Laptop, ownerId=null]
Here is what I am trying to do...
Bar contains all of the elements I want foo to have. There will only be one unique serial. I would like my program to check whether an element in the HashSet has a serial matching the id of a Bar in the list, i.e. serial == id.
Here is what I've been trying to do
Removed code and added clearer code below
I've verified the data is getting entered into the HashSet and List correctly by viewing it through the debugger.
foo is being pulled from a database through Hibernate, and bar is coming from a different source. If there is a matching element in bar, I need to add it to a list that I pass back to my UI, where I'll enter some additional data and then commit it to the database.
Let me know if this makes sense and if I can provide any more information.
Thanks
EDIT: Here is the class
@RequestMapping(value = "/system", method = RequestMethod.GET)
public @ResponseBody
Set<AbstractSystem> SystemList() {
    // Retrieve system list from database
    HashSet<AbstractSystem> systemData = new HashSet<AbstractSystem>(
            systemService.getSystemData());
    // Retrieve system info from cloud API
    List<SystemName> systemName = null;
    try {
        systemName = cloudClass.getImages();
    } catch (Exception e) {
        LOG.warn("Unable to get status", e);
    }
    // Tried this but, iter2 only has two items and iter has many more.
    // In production it will be the other way around, but I need to not
    // have to worry about that
    Iterator<SystemName> iter = systemName.iterator();
    Iterator<AbstractSystem> iter2 = systemData.iterator();
    while (iter.hasNext()) {
        SystemName temp = iter.next();
        while (iter2.hasNext()) {
            AbstractSystem temp2 = iter2.next();
            System.out.println("temp2.getSerial(): " + temp2.getSerial());
            System.out.println("temp.getId(): " + temp.getId());
            if (temp2.getSerial().equals(temp.getId())) {
                System.out.println("This will be slow...");
            }
        }
    }
    return systemData;
}
If N is the number of items in systemName and M is the number of items in systemData, then you've effectively built an O(N*M) method.
If you instead represent your systemData as a HashMap of AbstractSystem by AbstractSystem.getSerial() values, then you just loop through the systemName collection and lookup by systemName.getId(). This becomes more like O(N+M).
(You might want to avoid variables like iter, iter2, temp2, etc., since those make the code harder to read.)
EDIT - here's what I mean:
// Retrieve system list from database
HashMap<Integer, AbstractSystem> systemDataMap = new HashMap<Integer, AbstractSystem>(
        systemService.getSystemDataMap());

// Retrieve system info from cloud API
List<SystemName> systemNames = cloudClass.getImages();

for (SystemName systemName : systemNames) {
    if (systemDataMap.containsKey(systemName.getId())) {
        System.out.println("This will be slow...");
    }
}
I used Integer because I can't tell from your code what the types of AbstractSystem.getSerial() and SystemName.getId() are. This assumes that you store the system data as a Map elsewhere. If not, you could construct the map yourself here.
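If the map does not already exist, a sketch of building it from the existing systemService.getSystemData() call could look like this; it assumes getSerial() returns the same type as SystemName.getId(), shown here as String:
// Build a lookup map keyed by serial: O(M) once, then each lookup is O(1).
Map<String, AbstractSystem> systemDataMap = new HashMap<String, AbstractSystem>();
for (AbstractSystem system : systemService.getSystemData()) {
    systemDataMap.put(system.getSerial(), system);
}

List<SystemName> systemNames = cloudClass.getImages();
for (SystemName systemName : systemNames) {
    AbstractSystem match = systemDataMap.get(systemName.getId());
    if (match != null) {
        // found an element whose serial equals the cloud image id
    }
}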

How can I handle Java pass-by-value? I need pass-by-reference

I have this piece of code:
DataTableFactory<Object> TempDataTableFactory = new DataTableFactory<Object>();
DataTable<Object> tempDataTable = TempDataTableFactory.getInstance();
tempDataTable = dataTable;
ExecutedArguments e = new ExecutedArguments();
e.setDataTable(tempDataTable);
e.setExecutedCommand(cmd);
stack.addNewExecutedCommand(e);
result = operation.execute();
Now I just want to keep the old dataTable before execution. When I debug my code up to the line result = operation.execute(); there is no problem. In that line I change the dataTable, but because tempDataTable points to dataTable, it also changes. I don't want tempDataTable to change. How can I do this?
If it supports cloning, I would use tempDataTable.clone(). Otherwise you'll have to implement a copy constructor.
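A minimal sketch of the copy-constructor approach, assuming DataTable stores its contents in a list; the field name rows is illustrative, since the real DataTable class is not shown:
// Hypothetical copy constructor inside DataTable<T>; adapt to the real fields.
public DataTable(DataTable<T> other) {
    // Shallow copy of the row list: later changes to the original table are not seen here,
    // but the row objects themselves are still shared (deep-copy them if they are mutable).
    this.rows = new ArrayList<T>(other.rows);
}

// Usage: snapshot the table before the operation mutates it.
DataTable<Object> tempDataTable = new DataTable<Object>(dataTable);
ExecutedArguments e = new ExecutedArguments();
e.setDataTable(tempDataTable);
e.setExecutedCommand(cmd);
stack.addNewExecutedCommand(e);
result = operation.execute(); // dataTable may change here; tempDataTable keeps the old contents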
