I am perfectly able to add contacts one by one with the following code:
ArrayList<ContentProviderOperation> ops = new ArrayList<ContentProviderOperation>();
ops.add(ContentProviderOperation.newInsert(RawContacts.CONTENT_URI)
        .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, null)
        .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, null).build());
ops.add(ContentProviderOperation
        .newInsert(Data.CONTENT_URI)
        .withValueBackReference(Data.RAW_CONTACT_ID, 0)
        .withValue(Data.MIMETYPE,
                CommonDataKinds.StructuredName.CONTENT_ITEM_TYPE)
        .withValue(StructuredName.GIVEN_NAME, "Hello")
        .withValue(StructuredName.FAMILY_NAME, "World").build());
try {
    getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);
} catch (RemoteException e) {
    e.printStackTrace();
} catch (OperationApplicationException e) {
    e.printStackTrace();
}
However, when I try to add about 500 contacts one by one, it takes a few minutes, which is too long for my app. Is there any faster way to add several contacts?
Why not make the ArrayList a global that can be accessed from any activity? I wouldn't insert that much into a Bundle, as there is more going on when you do; it was only meant to pass small amounts of info. I would do it like this, making sure to declare the class in the manifest too:
public class MyStates extends Application {
    private ArrayList<ContentProviderOperation> ops = new ArrayList<ContentProviderOperation>();

    public ArrayList<ContentProviderOperation> getList() {
        return this.ops;
    }

    public void setList(ArrayList<ContentProviderOperation> o) {
        this.ops = o;
    }
}
You can use the same code you already have to add multiple contacts in a single batch operation, with small modifications.
A single batch can hold up to 500 operations; for each contact, keep including a back-reference in the Data URI operation that points at the index of that contact's raw-contacts insert operation, as sketched below.
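A minimal sketch of that batching, assuming a hypothetical Contact class with getGivenName()/getFamilyName() accessors (everything else comes from the question's own code):

ArrayList<ContentProviderOperation> ops = new ArrayList<ContentProviderOperation>();
for (Contact contact : contacts) {
    // Index of this contact's raw-contact insert within the batch;
    // the Data row below back-references it.
    int rawContactIndex = ops.size();
    ops.add(ContentProviderOperation.newInsert(RawContacts.CONTENT_URI)
            .withValue(RawContacts.ACCOUNT_TYPE, null)
            .withValue(RawContacts.ACCOUNT_NAME, null)
            .build());
    ops.add(ContentProviderOperation.newInsert(Data.CONTENT_URI)
            .withValueBackReference(Data.RAW_CONTACT_ID, rawContactIndex)
            .withValue(Data.MIMETYPE, CommonDataKinds.StructuredName.CONTENT_ITEM_TYPE)
            .withValue(StructuredName.GIVEN_NAME, contact.getGivenName())
            .withValue(StructuredName.FAMILY_NAME, contact.getFamilyName())
            .build());
}
try {
    // One provider round trip for the whole batch instead of one per contact.
    getContentResolver().applyBatch(ContactsContract.AUTHORITY, ops);
} catch (RemoteException | OperationApplicationException e) {
    e.printStackTrace();
}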
I'm currently working on a frontend for visualizing the results of searches in several foreign systems. At the moment the program queries one system after another and only continues when all foreign systems have answered.
The frontend is written in Vaadin 13 and should be able to refresh the page via push.
I have six controller classes for six foreign systems to query, and I want to start all queries at the same time without having to wait for the previous controller to finish.
My problem is that I can't find a tutorial that covers this particular problem. All the tutorials I found are about starting the same task several times in parallel.
This is how I start the searches at the moment:
public static void performSingleSearch(ReferenceSystem referenceSystem, String searchField,
        List<String> searchValues, SystemStage systemStage) throws Exception {
    if (!isAvailable(referenceSystem, systemStage)) return;
    Map<String, ReferenceElement> result = new HashMap<>();
    try {
        Class<?> classTemp = Class.forName(referenceSystem.getClassname());
        Method method = classTemp.getMethod("searchElements", String.class, List.class, SystemStage.class);
        result = (Map<String, ReferenceElement>) method.invoke(classTemp.newInstance(), searchField, searchValues, systemStage);
    } catch (Exception e) {
        return;
    }
    if (result != null) orderResults(result, referenceSystem);
}
I hope you can point me to a tutorial on how to do this, or better, a book on multithreading.
Best regards
Daniel
Seems to me the simplest approach is using CompletableFuture. Ignoring your atrocious use of reflection, I'm going to assume
interface ReferenceSystem {
    public Map<String, ReferenceElement> searchElements(List<String> args);
}
List<ReferenceSystem> systems = getSystems();
List<String> searchArguments = getSearchArguments();
so you can do
List<CompletableFuture<Map<String, ReferenceElement>>> futures = new ArrayList<>();
for (ReferenceSystem system : systems) {
    futures.add(CompletableFuture.supplyAsync(() -> system.searchElements(searchArguments)));
}
or with Java 8 Streams
List<CompletableFuture<Map<String, ReferenceElement>>> futures =
    systems.stream()
           .map(s -> CompletableFuture.supplyAsync(
                   () -> s.searchElements(searchArguments)))
           .collect(Collectors.toList());
Now futures contains a list of futures that will eventually produce the Maps you're looking for; you can access each one with get(), which blocks until the result is present:
for (CompletableFuture<Map<String, ReferenceElement>> future : futures) {
    System.out.printf("got a result: %s%n", future.get());
}
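If you'd rather block once for everything instead of looping over get(), CompletableFuture.allOf does that (a standard-library addition on my part, not from the code above):

// Block until every search has completed; join() afterwards cannot block.
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
for (CompletableFuture<Map<String, ReferenceElement>> future : futures) {
    System.out.printf("got a result: %s%n", future.join());
}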
For your fairly simple case, all you need is either a list of threads that you wait on to finish, or, even easier, a thread pool:
private static ExecutorService service = Executors.newFixedThreadPool(6); // change to whatever you want

public static void someMethod() {
    queueActions(Arrays.asList(
        () -> {
            try {
                performSingleSearch(null, null, null, null); // fill your data
            } catch (Exception e) {
                e.printStackTrace();
            }
        },
        () -> {
            try {
                performSingleSearch(null, null, null, null); // fill your data #2 etc
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    ));
}

public static void queueActions(List<Runnable> actions) {
    // Start with negative permits: acquire() below can only succeed
    // after every action has called release().
    Semaphore wait = new Semaphore((-actions.size()) + 1);
    for (Runnable action : actions) {
        service.execute(() -> {
            try {
                action.run();
            } finally {
                wait.release();
            }
        });
    }
    try {
        wait.acquire(); // blocks until all actions have finished
    } catch (InterruptedException e) {
    }
}
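As an aside (my suggestion, not part of this answer), ExecutorService.invokeAll gives you the same block-until-done behavior without the semaphore bookkeeping, at the cost of wrapping the actions as Callables:

List<Callable<Void>> tasks = new ArrayList<>();
tasks.add(() -> {
    performSingleSearch(null, null, null, null); // fill your data
    return null; // Callable must return a value; use null for Void
});
try {
    service.invokeAll(tasks); // blocks until every task has completed
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}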
The question remains whether you want the requests to be executed at the same time, one at a time, or something else (joining them into one big request, etc.).
Here is my code:
Whatever exception it throws, I don't want to catch it outside the loop; I want to continue my loop and handle the exception separately. I don't want to use another try/catch inside this try/catch. Can someone guide me on this?
I don't want to use another try catch inside this try catch.
Yes you do.
MarketplaceBO marketplaceBOObject = new MarketplaceBO(entity.getMarketplaceID());
try {
    marketplaceBOObject.loadFromSable();
} catch (WhateverException e) {
    // Do something here, or, if you prefer, add the exception to a list and process later
    doSomething();
    // Continue your loop above
    continue;
}
if (marketplaceBOObject.isActive()) {
If you REALLY don't want to do this, your loadFromSable() method could return some object that provides information about success/failure of the call. But I wouldn't recommend that.
Do it this way; the rest of your code will run whether there is an exception or not:
for (MerchantMarketplaceBO entity : merchantMarketplaceBOList) {
    MarketplaceBO marketplaceBOObject = new MarketplaceBO(entity.getMarketplaceID());
    try {
        marketplaceBOObject.loadFromSable();
        if (marketplaceBOObject.isActive()) {
            resultVector.add(marketplaceBOObject.getCodigoMarketplace());
        }
    } catch (Exception e) {
        if (marketplaceBOObject.isActive()) {
            resultVector.add(marketplaceBOObject.getCodigoMarketplace());
        }
    }
}
Another "trick" to deal with that is to move the body to the loop into a separate method having the "additional" try/catch block:
private MarketplaceBO loadFromSable(MerchantMarketplaceBO entity) {
    MarketplaceBO marketplaceBOObject = new MarketplaceBO(entity.getMarketplaceID());
    try {
        marketplaceBOObject.loadFromSable();
    } catch (WhateverException e) {
        // do something to make marketplaceBOObject a valid object
        // or at least log the exception
    }
    return marketplaceBOObject;
}
But since we want to stick to the Same Layer of Abstraction principle, we also need to move the other parts of that method into new, smaller methods:
public void serveFromSableV2() {
    String merchantCustomerID = ObfuscatedId.construct(request.getMerchantCustomerID()).getPublicEntityId();
    try {
        List<MerchantMarketplaceBO> merchantMarketplaceBOList =
                getAllMerchantMarketplacesBOsByMerchant();
        Vector<Marketplace> resultVector = new Vector<>();
        for (MerchantMarketplaceBO entity : merchantMarketplaceBOList) {
            MarketplaceBO marketplaceBOObject = loadFromSable(entity);
            addToActiveMarketplacesList(marketplaceBOObject, resultVector);
        }
        verifyHavingActiveMarketPlaces(resultVector);
        setResponseWithWrapped(resultVector);
    } catch (EntityNotFoundException | SignatureMismatchException | InvalidIDException e) {
        throw new InvalidIDException("merch=" + merchantCustomerID + "[" + request.getMerchantCustomerID() + "]"); // C++ stack throws InvalidIDException if marketplace is not found in datastore
    }
}
You could refactor the load into a separate method that catches and returns the exception instead of throwing it:
private Optional<Exception> tryLoadFromSable(MarketplaceBO marketplaceBOObject) {
    try {
        marketplaceBOObject.loadFromSable();
        return Optional.empty();
    } catch (Exception e) {
        return Optional.of(e);
    }
}
Then inside your loop:
// inside for loop...
MarketplaceBO marketplaceBOObject = new MarketplaceBO(entity.getMarketplaceID());
Optional<Exception> loadException = tryLoadFromSable(marketplaceBOObject);
if (loadException.isPresent()) {
    // Do something here: log it, save it in a list for later processing, etc.
}
I'm still new to the ehcache API so I may be missing something obvious but here's my current issue.
I currently have a persistent-disk cache that's being stored on my server. I'm currently implementing a passive write-behind cache method that saves key/value pairs to a database table. In the event the persistent-disk cache is lost, I'd like to restore the cache from the database table.
Example I'm using for my write-behind logic:
http://scalejava.blogspot.com/2011/10/ehcache-write-behind-example.html
I'm building a disk-persistent cache using the following method:
import com.googlecode.ehcache.annotations.Cacheable;
import com.googlecode.ehcache.annotations.KeyGenerator;
import com.googlecode.ehcache.annotations.PartialCacheKey;

@Cacheable(cacheName = "readRuleCache", keyGenerator = @KeyGenerator(name = "StringCacheKeyGenerator"))
public Rule read(@PartialCacheKey Rule rule, String info) {
    System.out.print("Cache miss: " + rule.toString());
    // code to manipulate Rule object using info
    try {
        String serializedRule = objectSerializer.convertToString(rule);
        readRuleCache.putWithWriter(new Element(rule.toString(), serializedRule));
    } catch (IOException ioe) {
        System.out.println("error serializing rule object");
        ioe.printStackTrace();
    }
    return rule;
}
The write method I'm overriding in my CacheWriter implementation works fine. Things are getting saved to the database.
@Override
public void write(final Element element) throws CacheException {
    String insertKeyValuePair = "INSERT INTO RULE_CACHE (ID, VALUE) VALUES " +
            "('" + element.getObjectKey().toString() + "','" +
            element.getObjectValue().toString() + "')";
    Statement statement;
    try {
        statement = connection.createStatement();
        statement.executeUpdate(insertKeyValuePair);
        statement.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
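As a side note (my suggestion, not from the original post), a PreparedStatement avoids quoting problems if a key or serialized value ever contains an apostrophe:

// Parameterized variant of the same insert.
String sql = "INSERT INTO RULE_CACHE (ID, VALUE) VALUES (?, ?)";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, element.getObjectKey().toString());
    ps.setString(2, element.getObjectValue().toString());
    ps.executeUpdate();
} catch (SQLException e) {
    e.printStackTrace();
}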
Querying and deserializing the string back into an object works fine too. I've validated that all the values of the object are present. The disk-persistent cache is also being populated when I delete the *.data file and restart the application:
public void preLoadCache() {
    CacheManager cacheManager = CacheManager.getInstance();
    readRuleCache = cacheManager.getCache("readRuleCache");
    Query query = em.createNativeQuery("select * from RULE_CACHE");
    @SuppressWarnings("unchecked")
    List<Object[]> resultList = query.getResultList();
    for (Object[] row : resultList) {
        try {
            System.out.println("Deserializing: " + row[1].toString());
            Rule rule = objectSerializer.convertToObject((String) row[1]);
            rule = RuleValidator.verify(rule);
            if (rule != null) {
                readRuleCache.putIfAbsent(new Element(row[0], rule));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Question
Everything looks OK. However, when I pass Rule objects whose keys should already exist in the cache, the read method is still called and the *.data file size increases. The write method doesn't attempt to insert existing keys again, though. Any ideas on what I'm doing wrong?
It turns out this was the culprit:
keyGenerator=#KeyGenerator(name="StringCacheKeyGenerator")
The source material I read suggested that the toString() method I overrode would be used as the key for the cache key/value pair. After further research, it turns out that this is not true: the toString() value is used, but it is nested inside class and method information to create a much larger key.
Reference:
http://code.google.com/p/ehcache-spring-annotations/wiki/StringCacheKeyGenerator
Example Expected key:
"[49931]"
Example Actual Key:
"[class x.y.z.WeatherDaoImpl, getWeather class x.y.z.Weather, [class java.lang.String], [49931]]"
All my Crystal Reports are published on my Business Objects server.
All of them are connected to Business Views.
All of these Business Views use the same dynamic data connection.
As a result, my reports have this Dynamic Data Connection parameter.
I can change this parameter via the Central Management Console.
But now I would like to be able to change it in code with the BO SDK.
I have this method that I think is close to achieving what I want; I just can't save the changes.
public static void updateParameter(IInfoObject report) {
    // get all parameters
    try {
        IReport rpt = (IReport) report;
        int i = 0;
        IReportParameter params;
        for (i = 0; i < rpt.getReportParameters().size(); i++) {
            params = (IReportParameter) rpt.getReportParameters().get(i);
            int y = 0;
            for (y = 0; y < params.getCurrentValues().getValues(IReportParameter.ReportVariableValueType.STRING).size(); y++) {
                IParameterFieldDiscreteValue val = (IParameterFieldDiscreteValue) params.getCurrentValues().getValues(IReportParameter.ReportVariableValueType.STRING).getValue(y);
                if (val.getDescription().contains("Data Connection")) {
                    val.setValue(boConstance.conn_EXAMPLE1);
                    val.setDescription(boConstance.desc_EXAMPLE1);
                    // save the new parameter ?????
                    System.out.println("report parameters modified");
                }
            }
        }
    } catch (SDKException e) {
        e.printStackTrace();
    }
}
Any Idea ? Thanks,
Since you are already setting the parameters you should just need to call the save method on the IReport itself. You wouldn't save the parameters directly since they are data belonging to the report.
So, to finish your example, after the for loop:
try {
    IReport rpt = (IReport) report;
    int i = 0;
    IReportParameter params;
    for (i = 0; i < rpt.getReportParameters().size(); i++) {
        // do for loop here setting the parameters
    }
    rpt.save();
} catch (SDKException e) {
    e.printStackTrace();
}
This is a very simple example of Hibernate usage in Java: a function that, when called, creates a new object in the database. If everything goes fine, the changes are stored and visible immediately (no cache issues). If something fails, the database should be restored as if this function had never been called.
public String createObject() {
    PersistentTransaction t = null;
    try {
        t = PersistentManager.instance().getSession().beginTransaction();
        Foods f = new Foods(); // Foods is a Hibernate object
        // set some values on f
        f.save();
        t.commit();
        PersistentManager.instance().getSession().clear();
        return "everything allright";
    } catch (Exception e) {
        System.out.println("Error while creating object");
        e.printStackTrace();
        try {
            t.rollback();
            System.out.println("Database restored after the error.");
        } catch (Exception e1) {
            System.out.println("Error restoring database!");
            e1.printStackTrace();
        }
    }
    return "there was an error";
}
Is there any error? Would you change / improve anything?
I don't see anything wrong with your code here. As @Vinod has mentioned, we rely on frameworks like Spring to handle this tedious boilerplate code. After all, you don't want code like this to exist in every possible DAO method you have. It makes things difficult to read and debug.
One option is to use AOP where you apply AspectJ's "around" advice on your DAO method to handle the transaction. If you don't feel comfortable with AOP, then you can write your own boiler plate wrapper if you are not using frameworks like Spring.
Here's an example that I crafted up that might give you an idea:-
// think of this as an anonymous block of code you want to wrap with transaction
public abstract class CodeBlock {
    public abstract void execute();
}

// wraps transaction around the CodeBlock
public class TransactionWrapper {
    public boolean run(CodeBlock codeBlock) {
        PersistentTransaction t = null;
        boolean status = false;
        try {
            t = PersistentManager.instance().getSession().beginTransaction();
            codeBlock.execute();
            t.commit();
            status = true;
        } catch (Exception e) {
            e.printStackTrace();
            try {
                t.rollback();
            } catch (Exception ignored) {
            }
        } finally {
            // close session
        }
        return status;
    }
}
Then, your actual DAO method will look like this:-
TransactionWrapper transactionWrapper = new TransactionWrapper();

public String createObject() {
    boolean status = transactionWrapper.run(new CodeBlock() {
        @Override
        public void execute() {
            Foods f = new Foods();
            f.save();
        }
    });
    return status ? "everything allright" : "there was an error";
}
The save will go through a session rather than being called on the object, unless you have injected the session into the persistent object.
Also add a finally block and close the session there:
finally {
    // session.close()
}
Suggestion: if the code you posted was for learning purposes, then it is fine; otherwise, I would suggest using Spring to manage this boilerplate stuff and worrying only about the save.
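To illustrate that suggestion, here is a minimal sketch of the same operation with Spring's declarative transaction management (the service class and bean names are hypothetical; it assumes a configured SessionFactory and transaction manager):

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service class: Spring opens the transaction before the
// method runs, commits on normal return, and rolls back on a RuntimeException.
@Service
public class FoodService {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional
    public String createObject() {
        Foods f = new Foods();
        // set some values on f
        sessionFactory.getCurrentSession().save(f);
        return "everything allright";
    }
}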