Running multiple URLs at app start - BlackBerry Java

My requirement is to parse two URLs at application start; both return data that must be displayed in my application. I currently keep the two URLs in an array, loop over them in a background thread, and insert the parsed values into the database from that same thread. Is this the correct way of approaching the problem?
I have posted my code below; help of any kind is welcome :)
public StartConnecton(SplashScreen splashScreen)
{
    urls = new String[2];
    urls[0] = "http:xxxxxx.com";
    urls[1] = "http:yyy.com";
    _dbIRef = new ClassDatabase(1);
    _dbIRef.setSID(46);
    _splashScreen = (SplashScreen) splashScreen;
    _classDatabase = new ClassDatabase();
}
public void run()
{
    int size = urls.length;
    for (int i = 0; i < size; i++)
    {
        if (i == 0)
        {
            _id = 1;
        }
        else if (i == 1)
        {
            _id = 0;
        }
        try
        {
            String conn = this.getConnectionString();
            con = (HttpConnection) Connector.open(urls[i] + getConnectionString());
            con.setRequestMethod(HttpConnection.GET);
            con.setRequestProperty("User-Agent", "Profile/MIDP-1.0 Configuration/CLDC-1.0");
            System.out.println("CONNECTION!!!!!!!!!!!" + con);
            code = con.getResponseCode();
            System.out.println("CODE!!!!!!!!!!!" + code + "ID" + _id);
            if (code == HttpConnection.HTTP_OK)
            {
                is = con.openInputStream();
                int length = (int) con.getLength();
                new Parser(is, _id);
                is.close();
                con.close();
            }
        }
        catch (Exception e)
        {
            System.out.println("EXCEPTION!!!!!!!!!!" + e);
        }
    }
    _classDatabase.delete("Delete from topnews where sid = 46");
    _classDatabase.insertTopNews();
    _classDatabase.insertTabBar();
    _classDatabase.insertGalleryInfo();
    _topNewsScreen = new TopNewsScreen("TopNews");
    _splashScreen.swapScreen(_topNewsScreen);
}

The problems you have at the moment are:
1. The connections are instantiated sequentially.
If the first one fails (server not there, BlackBerry MDS servers down, etc.), you'll have to wait around 30 seconds for the Connector.open request to time out before the second connection is tried.
2. The UI will freeze during connection attempts. I'm guessing you're doing this on the event thread as well, which means the app will freeze while Connector.open is running, because that method blocks.
The solution to both of the above problems is to wrap each connection attempt in a separate Thread (a rough sketch follows point 3 below). Here's a nice example: http://mnarinsky.blogspot.com/2011/03/blackberry-sending-http-request-in.html
3. Redundant code. What is that if (i == 0) block doing? If all you're trying to do is make _id equal 1 when i == 0, then just write _id = (i == 0) ? 1 : 0;. Alternatively, reverse the order in which you put the URLs into your array, use i directly, and remove the _id variable entirely.
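For illustration, here is a minimal sketch of points 2 and 3 combined, with one thread per URL. It reuses Parser and getConnectionString() from the question; how the pieces fit together is an assumption, so treat it as a starting point rather than a drop-in replacement:

// Sketch only: one worker thread per URL, so a slow or dead server doesn't hold up the other request.
for (int i = 0; i < urls.length; i++)
{
    final String url = urls[i] + getConnectionString();
    final int id = (i == 0) ? 1 : 0; // replaces the if/else block (point 3)
    new Thread(new Runnable()
    {
        public void run()
        {
            HttpConnection c = null;
            InputStream in = null;
            try
            {
                c = (HttpConnection) Connector.open(url);
                c.setRequestMethod(HttpConnection.GET);
                if (c.getResponseCode() == HttpConnection.HTTP_OK)
                {
                    in = c.openInputStream();
                    new Parser(in, id);
                }
            }
            catch (IOException e)
            {
                System.out.println("Connection failed: " + e);
            }
            finally
            {
                try { if (in != null) in.close(); } catch (IOException ignored) {}
                try { if (c != null) c.close(); } catch (IOException ignored) {}
            }
        }
    }).start();
}

Note that the database inserts and the swap to TopNewsScreen should only happen after both workers have finished, so you would still need some completion signal (a simple counter, or Thread.join on both threads).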

Related

JnetPcap: reading from offline file very slow

I'm building a sort of custom version of Wireshark with jnetpcap v1.4r1425. I just want to open offline pcap files and display them in my TableView, which works great except for the speed.
The files I open are around 100 MB with 700k packets.
public ObservableList<Frame> readOfflineFiles1(int numFrames) {
    ObservableList<Frame> frameData = FXCollections.observableArrayList();
    if (numFrames == 0) {
        numFrames = Pcap.LOOP_INFINITE;
    }
    final StringBuilder errbuf = new StringBuilder();
    final Pcap pcap = Pcap.openOffline(FileAddress, errbuf);
    if (pcap == null) {
        System.err.println(errbuf); // Error is stored in errbuf if any
        return null;
    }
    JPacketHandler<StringBuilder> packetHandler = new JPacketHandler<StringBuilder>() {
        public void nextPacket(JPacket packet, StringBuilder errbuf) {
            if (packet.hasHeader(ip)) {
                sourceIpRaw = ip.source();
                destinationIpRaw = ip.destination();
                sourceIp = org.jnetpcap.packet.format.FormatUtils.ip(sourceIpRaw);
                destinationIp = org.jnetpcap.packet.format.FormatUtils.ip(destinationIpRaw);
            }
            if (packet.hasHeader(tcp)) {
                protocol = tcp.getName();
                length = tcp.size();
                int payloadOffset = tcp.getOffset() + tcp.size();
                int payloadLength = tcp.getPayloadLength();
                buffer.peer(packet, payloadOffset, payloadLength); // No copies, by native reference
                info = buffer.toHexdump();
            } else if (packet.hasHeader(udp)) {
                protocol = udp.getName();
                length = udp.size();
                int payloadOffset = udp.getOffset() + udp.size();
                int payloadLength = udp.getPayloadLength();
                buffer.peer(packet, payloadOffset, payloadLength); // No copies, by native reference
                info = buffer.toHexdump();
            }
            if (packet.hasHeader(payload)) {
                infoRaw = payload.getPayload();
                length = payload.size();
            }
            frameData.add(new Frame(packet.getCaptureHeader().timestampInMillis(), sourceIp, destinationIp, protocol, length, info));
            //System.out.print(i+"\n");
            //i=i+1;
        }
    };
    pcap.loop(numFrames, packetHandler, errbuf);
    pcap.close();
    return frameData;
}
This code is very fast for maybe the first 400k packets, but after that it slows down a lot. It needs around 1 minute for the first 400k packets and around 10 minutes for the rest. What is the issue here?
It's not that the list is getting too time-consuming to work with, is it? The list's add method is O(1), isn't it?
I asked about this on the official jnetpcap forum too, but it's not very active.
Edit:
It turns out it slows down massively because of the heap usage. Is there a way to reduce this?
As the profiler showed you, you're running low on memory and it starts to slow down.
Either give the JVM more memory with -Xmx, or don't load all the packets into memory at once.
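If raising -Xmx is not enough, a rough sketch of the second option is below: hand each Frame to a sink as soon as it is built instead of accumulating 700k of them in the ObservableList. The FrameSink interface and the elided header handling are assumptions; the rest mirrors the code in the question.

// Hypothetical callback so the caller decides where frames go (file, DB, lazily-filled TableView, ...).
public interface FrameSink { void accept(Frame frame); }

public void readOfflineStreaming(int numFrames, final FrameSink sink) {
    final StringBuilder errbuf = new StringBuilder();
    final Pcap pcap = Pcap.openOffline(FileAddress, errbuf);
    if (pcap == null) {
        System.err.println(errbuf);
        return;
    }
    pcap.loop(numFrames == 0 ? Pcap.LOOP_INFINITE : numFrames, new JPacketHandler<StringBuilder>() {
        public void nextPacket(JPacket packet, StringBuilder user) {
            // ... same ip/tcp/udp/payload handling as in the question ...
            sink.accept(new Frame(packet.getCaptureHeader().timestampInMillis(),
                    sourceIp, destinationIp, protocol, length, info));
            // Nothing is retained here, so the heap stays flat regardless of file size.
        }
    }, errbuf);
    pcap.close();
}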

How to execute tons of insert queries read from a file in SQLite on Android

Currently I have a requirement where I need to read tons of insert queries (approximately 100,000) written in a .sql file. I need to read this file from the SD card and update the local SQLite database in my application.
I am using the code below, and the update succeeds:
At the same time I am worried about performance and the risk of the application hanging when the SQL file gets even heavier. Do I need to change the approach I am using right now? Is there a better implementation with the SQLite database to perform this task without that risk?
/* Iterate through lines (assuming each insert has its own line
   and there's no other stuff) */
while (insertReader.ready()) {
    String insertStmt = insertReader.readLine();
    db.execSQL(insertStmt);
    bytesRead += insertStmt.length();
    int percent = (int) (bytesRead * 100 / totalBytes);
    // To show the data-update progress
    if (previousPercent < percent) {
        System.out.println(percent);
        previousPercent = percent;
        Bundle b = new Bundle();
        b.putInt(AppConstants.KEY_PROGRESS_PERCENT, percent);
        Message msg = new Message();
        msg.setData(b);
        errorViewhandler.sendMessage(msg);
    }
    result++;
}
insertReader.close();
I have found that the fastest way to import a lot of data into your database is to use applyBatch() on a ContentProvider.
Here is an example I reduced from one of my projects:
ArrayList<ContentProviderOperation> cpoList = new ArrayList<ContentProviderOperation>();
while (condition) {
    ContentProviderOperation.Builder b = ContentProviderOperation.newInsert(YourOwnContentProvider.CONTENT_URI);
    b.withYieldAllowed(true);
    b.withValue(COLUMN_ONE, 1);
    b.withValue(COLUMN_TWO, 2);
    cpoList.add(b.build());
    // You could check here whether you already have a lot of items in your list.
    // If you do, you should send them to applyBatch already.
}
if (cpoList.size() > 0) {
    context.get().getContentResolver().applyBatch(YourOwnContentProvider.AUTHORITY, cpoList);
}
As an example implementation of applyBatch:
@Override
public ContentProviderResult[] applyBatch(ArrayList<ContentProviderOperation> operations) {
    ContentProviderResult[] result = new ContentProviderResult[operations.size()];
    int i = 0;
    SQLiteDatabase sqlDB = db.getWritableDatabase();
    sqlDB.beginTransaction();
    try {
        for (ContentProviderOperation operation : operations) {
            // i is the number of results applied so far, usable for back references
            result[i] = operation.apply(this, result, i);
            i++;
        }
        sqlDB.setTransactionSuccessful();
    } catch (OperationApplicationException e) {
        // Deal with exception
    } finally {
        sqlDB.endTransaction();
    }
    return result;
}
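As the comment in the first snippet hints, for very large files you can also flush the list in chunks so a single batch never grows unbounded. A rough sketch, where the 500-operation threshold is an arbitrary assumption to tune for your data:

ArrayList<ContentProviderOperation> cpoList = new ArrayList<ContentProviderOperation>();
final int BATCH_SIZE = 500; // arbitrary chunk size
while (insertReader.ready()) {
    // Build one insert per line, as in your own loop.
    ContentProviderOperation.Builder b = ContentProviderOperation.newInsert(YourOwnContentProvider.CONTENT_URI);
    b.withYieldAllowed(true);
    // b.withValue(...) calls derived from the current line go here
    cpoList.add(b.build());
    if (cpoList.size() >= BATCH_SIZE) {
        // applyBatch throws RemoteException/OperationApplicationException; handle or declare them.
        context.get().getContentResolver().applyBatch(YourOwnContentProvider.AUTHORITY, cpoList);
        cpoList.clear();
    }
}
if (!cpoList.isEmpty()) {
    context.get().getContentResolver().applyBatch(YourOwnContentProvider.AUTHORITY, cpoList);
}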

Search Box for JPanel

I am in the middle of creating an app that allows users to apply for job positions and upload their CVs. I'm currently stuck on trying to make a search box for the admin to be able to search for keywords. The app will then look through all the CVs, and if it finds such keywords it will show a list of CVs that contain them. I am fairly new to GUI design and app creation, so I am not sure how to go about doing it. I wish to have it done in Java and am using the Eclipse WindowBuilder to help me design it. Any help will be greatly appreciated: hints, advice, anything. Thank you.
Well, this is not the right design approach, as a real-time search of words in all files of a given folder will be slow and not sustainable in the long run. Ideally you should index all CVs by keyword. The search should run on the index and then fetch the CVs associated with the matching entry (think of indexes as similar to tags). There are many options for indexing: simple DB indexing, Apache Lucene, or the following steps to create an index using maps and refer to that index for search.
Create a map Map<String, List<File>> for keeping the association of keywords to files.
Iterate through all files, and for each word in each file, add that file to the list corresponding to that word in your index map.
Here is Java code that will work for you, but I would still suggest changing your design approach and using indexes.
File dir = new File("Folder for CV's");
if (dir.exists())
{
    Pattern p = Pattern.compile("Java");
    ArrayList<String> list = new ArrayList<String>(); // list of CVs
    for (File f : dir.listFiles())
    {
        if (!f.isFile()) continue;
        try
        {
            FileInputStream fis = new FileInputStream(f);
            byte[] data = new byte[fis.available()];
            fis.read(data);
            String text = new String(data);
            Matcher m = p.matcher(text);
            if (m.find())
            {
                list.add(f.getName()); // add file to found-keyword list
            }
            fis.close();
        }
        catch (Exception e)
        {
            System.out.print("\n\t Error processing file : " + f.getName());
        }
    }
    System.out.print("\n\t List : " + list); // list of files containing the keyword
} // only process if the directory exists
else
{
    System.out.print("\n Directory doesn't exist.");
}
Here you get the list of files containing "Java" to show. As I said, use indexes :)
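If you do go the index route described above, a minimal sketch of building the Map<String, List<File>> once and answering searches from it could look like this (splitting on non-word characters is a simplification; a real index would also handle stemming, stop words, and so on):

Map<String, List<File>> index = new HashMap<String, List<File>>();
for (File f : dir.listFiles())
{
    if (!f.isFile()) continue;
    try
    {
        FileInputStream fis = new FileInputStream(f);
        byte[] data = new byte[(int) f.length()];
        fis.read(data);
        fis.close();
        for (String word : new String(data).toLowerCase().split("\\W+"))
        {
            List<File> files = index.get(word);
            if (files == null)
            {
                files = new ArrayList<File>();
                index.put(word, files);
            }
            if (!files.contains(f)) files.add(f);
        }
    }
    catch (IOException e)
    {
        System.out.print("\n\t Error indexing file : " + f.getName());
    }
}
// A search is now a single lookup instead of a scan over every CV:
List<File> javaCVs = index.get("java");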
Thanks for taking the time to look into my problem.
I have actually come up with a solution of my own. It is probably very amateurish, but it works for me.
JButton btnSearch = new JButton("Search");
btnSearch.addActionListener(new ActionListener()
{
    public void actionPerformed(ActionEvent arg0)
    {
        list.clear();
        String s = SearchBox.getText();
        int i = 0, present = 0;
        int id;
        try
        {
            Class.forName(driver).newInstance();
            Connection conn = DriverManager.getConnection(url + dbName, userName, password);
            Statement st = conn.createStatement();
            ResultSet res = st.executeQuery("SELECT * FROM javaapp.test");
            while (res.next())
            {
                i = 0;
                present = 0;
                while (i < 9)
                {
                    String out = res.getString(search[i]);
                    if (out.toLowerCase().contains(s.toLowerCase()))
                    {
                        present = 1;
                        break;
                    }
                    i++;
                }
                if (tglbtnNormalshortlist.isSelected())
                {
                    if (present == 1 && res.getInt("Shortlist") == 1)
                    {
                        id = res.getInt("Candidate");
                        String print = res.getString("Name");
                        list.addElement(print + " " + id);
                    }
                }
                else
                {
                    if (present == 1 && res.getInt("Shortlist") == 0)
                    {
                        id = res.getInt("Candidate");
                        String print = res.getString("Name");
                        list.addElement(print + " " + id);
                    }
                }
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
});

Java: Marshalling to XML using JAXB, how to properly multithread

I am trying to take a very long file of strings and convert it to XML according to a schema I was given. I used JAXB to create classes from that schema. Since the file is very large, I created a thread pool to improve performance, but since then each thread only processes one line of the file and marshals it to the XML file.
Below is my home class, where I read from the file. Each line is a record of a transaction; for every new user encountered, a list is made to store all of that user's transactions, and each list is put into a map. I made it a ConcurrentHashMap because multiple threads will work on the map simultaneously; is this the correct thing to do?
After the lists are created, a thread is made for each user. Each thread runs the ProcessCommands Runnable below and receives from home the list of transactions for its user.
public class home{
    public static File XMLFile = new File("LogFile.xml");
    Map<String, List<String>> UserMap = new ConcurrentHashMap<String, List<String>>();
    String[] UserNames = new String[5000];
    int numberOfUsers = 0;

    try{
        BufferedReader reader = new BufferedReader(new FileReader("test.txt"));
        String line;
        while ((line = reader.readLine()) != null)
        {
            parsed = line.split(",|\\s+");
            if(!parsed[2].equals("./testLOG")){
                if(Utilities.checkUserExists(parsed[2], UserNames) == false){ //User does not already exist
                    System.out.println("New User: " + parsed[2]);
                    UserMap.put(parsed[2], new ArrayList<String>()); //Create list of transactions for new user
                    UserMap.get(parsed[2]).add(line); //Add first item to new list
                    UserNames[numberOfUsers] = parsed[2]; //Add new user
                    numberOfUsers++;
                }
                else{ //User already existed
                    UserMap.get(parsed[2]).add(line);
                }
            }
        }
        reader.close();
    } catch (IOException x) {
        System.err.println(x);
    }

    //get start time
    long startTime = new Date().getTime();
    tCount = numberOfUsers;
    ExecutorService threadPool = Executors.newFixedThreadPool(tCount);
    for(int i = 0; i < numberOfUsers; i++){
        System.out.println("Starting Thread " + i + " for user " + UserNames[i]);
        Runnable worker = new ProcessCommands(UserMap.get(UserNames[i]), UserNames[i], XMLFile);
        threadPool.execute(worker);
    }
    threadPool.shutdown();
    while(!threadPool.isTerminated()){
    }
    System.out.println("Finished all threads");
}
Here is the ProcessCommands class. The thread receives the list for its user and creates a marshaller. From what I understand, marshalling is not thread safe, so it is best to create one for each thread; is this the best way to do that?
When I create the marshallers, I know that each one (from each thread) will want to access the created file, causing conflicts, so I used synchronized; is that correct?
As the thread iterates through its list, each line calls for a certain case. There are a lot, so I just made pseudo-cases for clarity. Each case calls the function below.
public class ProcessCommands implements Runnable{
    private static final boolean DEBUG = false;
    private List<String> list = null;
    private String threadName;
    private File XMLfile = null;
    public Thread myThread;

    public ProcessCommands(List<String> list, String threadName, File XMLfile){
        this.list = list;
        this.threadName = threadName;
        this.XMLfile = XMLfile;
    }

    public void run(){
        Date start = null;
        int transactionNumber = 0;
        String[] parsed = new String[8];
        String[] quoteParsed = null;
        String[] universalFormatCommand = new String[9];
        String userCommand = null;
        Connection connection = null;
        Statement stmt = null;
        Map<String, UserObject> usersMap = null;
        Map<String, Stack<BLO>> buyMap = null;
        Map<String, Stack<SLO>> sellMap = null;
        Map<String, QLO> stockCodeMap = null;
        Map<String, BTO> buyTriggerMap = null;
        Map<String, STO> sellTriggerMap = null;
        Map<String, USO> usersStocksMap = null;
        String SQL = null;
        int amountToAdd = 0;
        int tempDollars = 0;
        UserObject tempUO = null;
        BLO tempBLO = null;
        SLO tempSLO = null;
        Stack<BLO> tempStBLO = null;
        Stack<SLO> tempStSLO = null;
        BTO tempBTO = null;
        STO tempSTO = null;
        USO tempUSO = null;
        QLO tempQLO = null;
        String stockCode = null;
        String quoteResponse = null;
        int usersDollars = 0;
        int dollarAmountToBuy = 0;
        int dollarAmountToSell = 0;
        int numberOfSharesToBuy = 0;
        int numberOfSharesToSell = 0;
        int quoteStockInDollars = 0;
        int shares = 0;
        Iterator<String> itr = null;
        int transactionCount = list.size();
        System.out.println("Starting " + threadName + " - listSize = " + transactionCount);

        //UO dollars, reserved
        usersMap = new HashMap<String, UserObject>(3); //userName -> UO
        //USO shares
        usersStocksMap = new HashMap<String, USO>(); //userName+stockCode -> shares
        //BLO code, timestamp, dollarAmountToBuy, stockPriceInDollars
        buyMap = new HashMap<String, Stack<BLO>>(); //userName -> Stack<BLO>
        //SLO code, timestamp, dollarAmountToSell, stockPriceInDollars
        sellMap = new HashMap<String, Stack<SLO>>(); //userName -> Stack<SLO>
        //BTO code, timestamp, dollarAmountToBuy, stockPriceInDollars
        buyTriggerMap = new ConcurrentHashMap<String, BTO>(); //userName+stockCode -> BTO
        //STO code, timestamp, dollarAmountToBuy, stockPriceInDollars
        sellTriggerMap = new HashMap<String, STO>(); //userName+stockCode -> STO
        //QLO timestamp, stockPriceInDollars
        stockCodeMap = new HashMap<String, QLO>(); //stockCode -> QLO
        //create user object and initialize stacks
        usersMap.put(threadName, new UserObject(0, 0));
        buyMap.put(threadName, new Stack<BLO>());
        sellMap.put(threadName, new Stack<SLO>());

        try {
            //Marshaller marshaller = getMarshaller();
            synchronized (this){
                Marshaller marshaller = init.jc.createMarshaller();
                marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
                marshaller.setProperty(Marshaller.JAXB_FRAGMENT, true);
                marshaller.marshal(LogServer.Root, XMLfile);
                marshaller.marshal(LogServer.Root, System.out);
            }
        } catch (JAXBException M) {
            M.printStackTrace();
        }

        Date timing = new Date();
        //universalFormatCommand = new String[8];
        parsed = new String[8];
        //iterate through workload file
        itr = this.list.iterator();
        while(itr.hasNext()){
            userCommand = (String) itr.next();
            itr.remove();
            parsed = userCommand.split(",|\\s+");
            transactionNumber = Integer.parseInt(parsed[0].replaceAll("\\[", "").replaceAll("\\]", ""));
            universalFormatCommand = Utilities.FormatCommand(parsed, parsed[0]);
            if(transactionNumber % 100 == 0){
                System.out.println(this.threadName + " - " + transactionNumber + " - " + (new Date().getTime() - timing.getTime()) / 1000);
            }
            /*System.out.print("UserCommand " + transactionNumber + ": ");
            for(int i = 0; i < 8; i++) System.out.print(universalFormatCommand[i] + " ");
            System.out.print("\n");*/
            //switch for user command
            switch (parsed[1].toLowerCase()) {
                case "One":
                    // *Do stuff*
                    LogServer.create_Log(universalFormatCommand, transactionNumber, CommandType.ADD);
                    break;
                case "Two":
                    // *Do stuff*
                    LogServer.create_Log(universalFormatCommand, transactionNumber, CommandType.ADD);
                    break;
            }
        }
    }
The function create_Log has multiple cases, so as before I left just one for clarity. The "QUOTE" case only calls one object-creation function, but other cases can create multiple objects. The type 'log' is a complex XML type that defines all the other object types, so in each call to create_Log I create a log type called Root. The class 'log' generated by JAXB includes a function to create a list of objects. The statement:
Root.getUserCommandOrQuoteServerOrAccountTransaction().add(quote_QuoteType);
takes the root element I created, creates a list, and adds the newly created object 'quote_QuoteType' to that list. Before I added threading, this method successfully created a list of as many objects as I wanted and then marshalled them, so I'm pretty positive the code in the LogServer class is not the issue. It is something to do with the marshalling and synchronization in the ProcessCommands class above.
public class LogServer{
    public static log Root = new log();

    public static QuoteServerType Log_Quote(String[] input, int TransactionNumber){
        ObjectFactory factory = new ObjectFactory();
        QuoteServerType quoteCall = factory.createQuoteServerType();
        // **Populate the QuoteServerType object called quoteCall**
        return quoteCall;
    }

    public static void create_Log(String[] input, int TransactionNumber, CommandType Command){
        System.out.print("TRANSACTION " + TransactionNumber + " is " + Command + ": ");
        for(int i = 0; i < input.length; i++) System.out.print(input[i] + " ");
        System.out.print("\n");
        switch(input[1]){
            case "QUOTE":
                System.out.print("QUOTE CASE");
                QuoteServerType quote_QuoteType = Log_Quote(input, TransactionNumber);
                Root.getUserCommandOrQuoteServerOrAccountTransaction().add(quote_QuoteType);
                break;
        }
    }
So you wrote a lot of code, but have you tried whether it actually works? After a quick look I doubt it. You should test your code logic part by part, not go all the way to the end. It seems you are just starting with Java; I would recommend practicing first on simple single-threaded applications. Sorry if I sound harsh, but I will try to be constructive as well:
Per convention, class names start with a capital letter and variable names with a lowercase one; you do it the other way around.
You should put your code in a method of your home (Home) class, not in the static block.
You are reading the whole file into memory; you do not process it line by line. After home is initialized, literally the whole content of the file will be held by the UserMap variable. If the file is really large you will run out of heap memory. If you assume a large file, you cannot do this and you have to redesign your app to store partial results somewhere. If your file is smaller than memory you could keep it like that (but you said it is large).
No need for UserNames; UserMap.containsKey will do the job.
Your thread pool's size should be in the range of your core count, not the number of users, or you will get thread thrashing (if you have blocking operations in your code, make tCount = 2 * processors; if not, keep it at the number of processors). Once one ProcessCommands finishes, the executor will start another one until you have finished them all, and you will be using all your processor cores efficiently.
DO NOT use while(!threadPool.isTerminated()); this line will completely consume one processor because it keeps checking constantly. Call awaitTermination instead (see the sketch after this list).
Your ProcessCommands has several map variables that will only ever hold one entry because, as you said, each thread processes data from one user.
The synchronized(this) in ProcessCommands will not work, as each thread will synchronize on a different object (a different instance of ProcessCommands).
I believe creating a marshaller is thread safe (check it), so there may be no need for synchronization at all.
You save your log (whatever it is) before you do the actual processing of the transaction lists.
The marshalling will overwrite the content of the file with the current state of LogServer.Root. If it is shared between your ProcessCommands instances (it seems so), what is the point of saving it in each thread? Do it once you are finished.
You don't need itr.remove();.
The log class (for the Root variable!) needs to be thread safe, as all the threads will call operations on it (so the list inside the log class must be a concurrent list, etc.).
And so on...
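To illustrate the points about the pool size and awaitTermination, a small sketch based on the code from the question (the one-hour cap is an arbitrary assumption; import java.util.concurrent.TimeUnit):

int cores = Runtime.getRuntime().availableProcessors();
ExecutorService threadPool = Executors.newFixedThreadPool(cores); // not one thread per user
for (int i = 0; i < numberOfUsers; i++) {
    threadPool.execute(new ProcessCommands(UserMap.get(UserNames[i]), UserNames[i], XMLFile));
}
threadPool.shutdown();
try {
    // Blocks without spinning; pick a timeout that fits your workload.
    threadPool.awaitTermination(1, TimeUnit.HOURS);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
System.out.println("Finished all threads");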
I would recommend that you:
Start with a simple single-threaded version that actually works.
Deal with processing line by line (store results for each user in a different file; you can keep a cache with transactions for recently used users so you don't keep writing to disk all the time, see Guava cache).
Process each user's transactions into your user log objects with multiple threads (again, if there are a lot you have to save them to disk rather than keep everything in memory).
Write code that combines the logs from different users into one (you may want to do this multithreaded too), though it will be mostly IO operations, so there is not much gain and it is trickier to do.
Good luck

Couchbase: net.spy.memcached.internal.CheckedOperationTimeoutException

I'm loading a local Couchbase instance with application-specific JSON objects.
Relevant code is:
CouchbaseClient getCouchbaseClient()
{
    List<URI> uris = new LinkedList<URI>();
    uris.add(URI.create("http://localhost:8091/pools"));
    CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
    cfb.setFailureMode(FailureMode.Retry);
    cfb.setMaxReconnectDelay(1500);
    cfb.setOpTimeout(10000);          // wait up to 10 seconds for an operation to succeed
    cfb.setOpQueueMaxBlockTime(5000); // wait up to 5 seconds when trying to enqueue an operation
    return new CouchbaseClient(cfb.buildCouchbaseConnection(uris, "my-app-bucket", ""));
}
Method to store entry (I'm using suggestions from Bulk Load and Exponential Backoff):
void continuosSet(CouchbaseClient cache, String key, int exp, Object value, int tries)
{
    OperationFuture<Boolean> result = null;
    OperationStatus status = null;
    int backoffexp = 0;
    do
    {
        if (backoffexp > tries)
        {
            throw new RuntimeException(MessageFormat.format("Could not perform a set after {0} tries.", tries));
        }
        result = cache.set(key, exp, value);
        try
        {
            if (result.get())
            {
                break;
            }
            else
            {
                status = result.getStatus();
                LOG.warn(MessageFormat.format("Set failed with status \"{0}\" ... retrying.", status.getMessage()));
                if (backoffexp > 0)
                {
                    double backoffMillis = Math.pow(2, backoffexp);
                    backoffMillis = Math.min(1000, backoffMillis); // 1 sec max
                    Thread.sleep((int) backoffMillis);
                    LOG.warn("Backing off, tries so far: " + tries);
                }
                backoffexp++;
            }
        }
        catch (ExecutionException e)
        {
            LOG.error("ExecutionException while doing set: " + e.getMessage());
        }
        catch (InterruptedException e)
        {
            LOG.error("InterruptedException while doing set: " + e.getMessage());
        }
    }
    while (status != null && status.getMessage() != null && status.getMessage().indexOf("Temporary failure") > -1);
}
When the continuosSet method is called for a large number of objects to store (single thread), e.g.
CouchbaseClient cache = getCouchbaseClient();
do
{
    SerializableData data = queue.poll();
    if (data != null)
    {
        final String key = data.getClass().getSimpleName() + data.getId();
        continuosSet(cache, key, 0, gson.toJson(data, data.getClass()), 100);
        ...
it generates a CheckedOperationTimeoutException inside the continuosSet method, in the result.get() operation.
Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: 127.0.0.1/127.0.0.1:11210
    at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:160) ~[spymemcached-2.8.12.jar:2.8.12]
    at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:133) ~[spymemcached-2.8.12.jar:2.8.12]
Can someone shed light on how to overcome and recover from this situation? Is there a good technique/workaround for bulk loading with the Java client for Couchbase? I already explored the documentation on Performing a Bulk Set, which is unfortunately for the PHP Couchbase client.
My suspicion is that you may be running this in a JVM spawned from the command line that doesn't have that much memory. If that's the case, you could hit longer GC pauses which could cause the timeout you're mentioning.
I think the best thing to do is to try a couple of things. First, raise the -Xmx argument to the JVM to use more memory. See if the timeout happens later or goes away. If so, then my suspicion about memory is correct.
If that doesn't work, raise the setOpTimeout() and see if that reduces the error or makes it go away.
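For example, using the builder you already have (the values are arbitrary; pick what fits your environment):

cfb.setOpTimeout(30000);            // give each operation up to 30 seconds instead of 10
cfb.setOpQueueMaxBlockTime(10000);  // and block a little longer before failing to enqueue
// ...and, per the first suggestion, start the loader JVM with a larger heap, e.g. -Xmx1g.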
Also, make sure you're using the latest client.
By the way, I don't think this is directly bulk loading related. It may happen owing to a lot of resource consumption during bulk loading, but it looks like the regular backoff must be working or you're not ever hitting it.
