I have been scratching my head over this for quite a while now. I have a large CSV file with hundreds of billions of records.
I have a simple task at hand: create JSON out of this CSV file and post it to a server. I want to make this task as quick as possible. So far, my code to read the CSV is as follows:
protected void readIdentityCsvDynamicFetch() {
    String csvFile = pathOfIdentities;
    CSVReader reader = null;
    PayloadEngine payloadEngine = new PayloadEngine();
    long counter = 0;
    int size;
    List<IdentityJmPojo> identityJmList = new ArrayList<IdentityJmPojo>();
    try {
        ExecutorService uploaderPoolService = Executors.newFixedThreadPool(3);
        long lineCount = lineCount(pathOfIdentities);
        logger.info("Line Count: " + lineCount);
        reader = new CSVReader(new BufferedReader(new FileReader(csvFile)), ',', '\'', OFFSET);
        String[] line;
        long startTime = System.currentTimeMillis();
        while ((line = reader.readNext()) != null) {
            // logger.info("Lines" + line[0] + line[1]);
            IdentityJmPojo identityJmPojo = new IdentityJmPojo();
            identityJmPojo.setIdentity(line[0]);
            identityJmPojo.setJM(line.length > 1 ? line[1] : jsonValue);
            identityJmList.add(identityJmPojo);
            size = identityJmList.size();
            switch (size) {
                case STEP:
                    counter = counter + STEP;
                    payloadEngine.prepareJson(identityJmList, uploaderPoolService, jsonKey);
                    identityJmList = new ArrayList<IdentityJmPojo>();
                    long stopTime = System.currentTimeMillis();
                    long elapsedTime = stopTime - startTime;
                    logger.info("=================== Time taken to read " + STEP + " records from CSV: " + elapsedTime + " and total records read: " + counter + "===================");
            }
        }
        if (identityJmList.size() > 0) {
            logger.info("=================== Executing Last Loop - Payload Size: " + identityJmList.size() + " ================= ");
            payloadEngine.prepareJson(identityJmList, uploaderPoolService, jsonKey);
        }
        uploaderPoolService.shutdown();
    } catch (Throwable e) {
        e.printStackTrace();
        logger.error("CsvReader || readIdentityCsvDynamicFetch method ", e);
    } finally {
        try {
            if (reader != null)
                reader.close();
        } catch (IOException e) {
            e.printStackTrace();
            logger.error("CsvReader || readIdentityCsvDynamicFetch method ", e);
        }
    }
}
Now I use a thread-pool ExecutorService; in the task's run() method I have an Apache HttpClient set up to post the JSON to the server. (I am using connection pooling and a keep-alive strategy, and I open and close the connection just once.)
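For illustration only, here is a minimal sketch of what such a posting task might look like. The actual SendPushNotification class is not shown in the question, so the endpoint URL, the shared pooled client, and the error handling are assumptions (Apache HttpClient 4.x and Jackson assumed); only the single-argument constructor is taken from the call site further down.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class SendPushNotification implements Runnable {

    // Shared, pooled client reused by every task; the endpoint URL is hypothetical.
    private static final String ENDPOINT = "https://example.com/upload";
    private static final CloseableHttpClient HTTP_CLIENT = HttpClients.custom()
            .setConnectionManager(new PoolingHttpClientConnectionManager())
            .build();
    private static final ObjectMapper MAPPER = new ObjectMapper();

    private final ObjectNode payload;

    public SendPushNotification(ObjectNode payload) {
        this.payload = payload;
    }

    @Override
    public void run() {
        try {
            HttpPost post = new HttpPost(ENDPOINT);
            post.setEntity(new StringEntity(MAPPER.writeValueAsString(payload), ContentType.APPLICATION_JSON));
            try (CloseableHttpResponse response = HTTP_CLIENT.execute(post)) {
                // Drain the body so the connection goes back to the pool for keep-alive reuse.
                EntityUtils.consume(response.getEntity());
            }
        } catch (Exception e) {
            e.printStackTrace(); // real code should log and consider retrying
        }
    }
}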
I create & post my JSON like below:
public void prepareJson(List<IdentityJmPojo> identities, ExecutorService notificationService, String key) {
    try {
        notificationService.submit(new SendPushNotification(prepareLowLevelJson(identities, key)));
        // prepareLowLevelJson(identities, key);
    } catch (Exception e) {
        e.printStackTrace();
        logger.error("PayloadEngine || readIdentityCsvDynamicFetch method ", e);
    }
}

private ObjectNode prepareLowLevelJson(List<IdentityJmPojo> identities, String key) {
    long startTime = System.currentTimeMillis();
    ObjectNode mainJacksonObject = JsonNodeFactory.instance.objectNode();
    ArrayNode dJacksonArray = JsonNodeFactory.instance.arrayNode();
    for (IdentityJmPojo identityJmPojo : identities) {
        ObjectNode dSingleObject = JsonNodeFactory.instance.objectNode();
        ObjectNode dProfileInnerObject = JsonNodeFactory.instance.objectNode();
        dSingleObject.put("identity", identityJmPojo.getIdentity());
        dSingleObject.put("ts", ts);
        dSingleObject.put("type", "profile");
        dProfileInnerObject.put(key, identityJmPojo.getJM());
        dSingleObject.set("profileData", dProfileInnerObject);
        dJacksonArray.add(dSingleObject);
    }
    mainJacksonObject.set("d", dJacksonArray);
    long stopTime = System.currentTimeMillis();
    long elapsedTime = stopTime - startTime;
    logger.info("===================Time to create JSON: " + elapsedTime + "===================");
    return mainJacksonObject;
}
Now comes the strange part. When I comment out the notification service:
// notificationService.submit(new SendPushNotification(prepareLowLevelJson(identities, key)));
Everything works smoothly; I can read the CSV and prepare the JSON in under 29,000 ms.
But when the actual task is performed, it fails and I get an OutOfMemoryError. I think there is a design flaw here. How can I handle this huge amount of data quickly? Any tips will be greatly appreciated.
I think creating the JSON objects and array inside the for loop is also taking a lot of memory; however, I don't see an alternative to this.
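One likely contributor (an assumption; the SendPushNotification internals are not shown) is that Executors.newFixedThreadPool(3) is backed by an unbounded queue: if the CSV reader produces batches faster than the three uploader threads can POST them, every pending task keeps its whole ObjectNode alive, which would match the Jackson nodes filling the heap in the stack trace below. A bounded queue with CallerRunsPolicy adds back-pressure; a minimal sketch:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

final class UploaderPool {
    // Bounded uploader pool: at most 10 batches wait in the queue; when it is full,
    // the submitting (CSV-reading) thread runs the task itself, which throttles reading.
    static ExecutorService create() {
        return new ThreadPoolExecutor(
                3, 3,                                   // same 3 worker threads as before
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(10),   // bounded work queue; the size is a guess to tune
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
With CallerRunsPolicy, a full queue makes the reading thread execute the POST itself, so only a handful of batches are ever in memory at once.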
Here is the stack trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.LinkedHashMap.createEntry(LinkedHashMap.java:442)
at java.util.HashMap.addEntry(HashMap.java:884)
at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:427)
at java.util.HashMap.put(HashMap.java:505)
at com.fasterxml.jackson.databind.node.ObjectNode._put(ObjectNode.java:861)
at com.fasterxml.jackson.databind.node.ObjectNode.put(ObjectNode.java:769)
at uploader.PayloadEngine.prepareLowLevelJson(PayloadEngine.java:50)
at uploader.PayloadEngine.prepareJson(PayloadEngine.java:24)
at uploader.CsvReader.readIdentityCsvDynamicFetch(CsvReader.java:97)
at uploader.Main.main(Main.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Related
I have two files; assume they are already sorted.
This is just example data; in reality I will have around 30-40 million records in each file, each file 7-10 GB in size, as the row length is large and fixed.
It's a simple text file; once the searched record is found, I will do some update and write it to a file.
File A may contain 0 or more records with a matching ID from File B.
The goal is to complete this processing in the least amount of time possible.
I am able to do it, but it is a time-consuming process...
Suggestions are welcome.
File A
1000000001,A
1000000002,B
1000000002,C
1000000002,D
1000000002,D
1000000003,E
1000000004,E
1000000004,E
1000000004,E
1000000004,E
1000000005,E
1000000006,A
1000000007,A
1000000008,B
1000000009,B
1000000010,C
1000000011,C
1000000012,C
File B
1000000002
1000000004
1000000006
1000000008
1000000010
1000000012
1000000014
1000000016
1000000018
// Not working as of now: the logic is wrong.
private static void readAndWriteFile() {
    System.out.println("Read Write File Started.");
    long time = System.currentTimeMillis();
    try (
        BufferedReader in = new BufferedReader(new FileReader(Commons.ROOT_PATH + "input.txt"));
        BufferedReader search = new BufferedReader(new FileReader(Commons.ROOT_PATH + "search.txt"));
        FileWriter myWriter = new FileWriter(Commons.ROOT_PATH + "output.txt");
    ) {
        String inLine = in.readLine();
        String searchLine = search.readLine();
        boolean isLoopEnd = true;
        while (isLoopEnd) {
            if (searchLine == null || inLine == null) {
                isLoopEnd = false;
                break;
            }
            if (searchLine.substring(0, 10).equalsIgnoreCase(inLine.substring(0, 10))) {
                System.out.println("Record Found - " + inLine.substring(0, 10) + " | " + searchLine.substring(0, 10));
                myWriter.write(inLine + System.lineSeparator());
                inLine = in.readLine();
            } else {
                inLine = in.readLine();
            }
        }
        in.close();
        myWriter.close();
        search.close();
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    System.out.println("Read and Write to File done in - " + (System.currentTimeMillis() - time));
}
My suggestion would be to use a database, as said in this answer. Using txt files has a big disadvantage compared to DBs, mostly because of the lack of indexes and the other points mentioned in that answer.
So what I would do is create a database (there are lots of good ones out there, such as MySQL, PostgreSQL, etc.), create the tables that are needed, and then read the file, inserting each line into the DB and using the DB to search and update the records.
Maybe this is not an answer to your concrete question on
Motive is to complete this processing in the least amount of time possible.
But this would be a worthy suggestion. Good luck.
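To make that concrete, here is a minimal sketch with plain JDBC, assuming an embedded H2 database on the classpath; the table and column names are made up:
import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class CsvToDbLoader {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:./records");
             Statement ddl = con.createStatement()) {
            ddl.execute("CREATE TABLE IF NOT EXISTS record(id VARCHAR(10), payload VARCHAR(255))");
            ddl.execute("CREATE INDEX IF NOT EXISTS idx_record_id ON record(id)");

            try (BufferedReader in = new BufferedReader(new FileReader("input.txt"));
                 PreparedStatement insert = con.prepareStatement("INSERT INTO record(id, payload) VALUES (?, ?)")) {
                String line;
                int batched = 0;
                while ((line = in.readLine()) != null) {
                    insert.setString(1, line.substring(0, 10)); // the 10-character key from the sample data
                    insert.setString(2, line);
                    insert.addBatch();
                    if (++batched % 10_000 == 0) {
                        insert.executeBatch(); // flush in chunks to keep memory flat
                    }
                }
                insert.executeBatch();
            }
            // Lookups and updates can now use the index, e.g.:
            // SELECT * FROM record WHERE id = ?   /   UPDATE record SET payload = ? WHERE id = ?
        }
    }
}
Once the rows sit in an indexed table, the matching and updating the question describes become simple WHERE id = ? statements instead of full file scans.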
With this approach I am able to process 50M records in 150 seconds on an i3 with 4 GB RAM and an SSD hard drive.
private static void readAndWriteFile() {
    System.out.println("Read Write File Started.");
    long time = System.currentTimeMillis();
    try (
        BufferedReader in = new BufferedReader(new FileReader(Commons.ROOT_PATH + "input.txt"));
        BufferedReader search = new BufferedReader(new FileReader(Commons.ROOT_PATH + "search.txt"));
        FileWriter myWriter = new FileWriter(Commons.ROOT_PATH + "output.txt");
    ) {
        String inLine = in.readLine();
        String searchLine = search.readLine();
        boolean isLoopEnd = true;
        while (isLoopEnd) {
            if (searchLine == null || inLine == null) {
                isLoopEnd = false;
                break;
            }
            // Since both files are already sorted, compare the numeric keys
            // and advance whichever pointer is behind (a classic merge join).
            String searchLineSubString = searchLine.substring(0, 10);
            String inputLineSubString = inLine.substring(0, 10);
            long searchInt = Long.parseLong(searchLineSubString);
            long inInt = Long.parseLong(inputLineSubString);
            if (searchLineSubString.equalsIgnoreCase(inputLineSubString)) {
                System.out.println("Record Found - " + inputLineSubString + " | " + searchLineSubString);
                myWriter.write(inLine + System.lineSeparator());
            }
            // Which pointer to move..
            if (searchInt < inInt) {
                searchLine = search.readLine();
            } else {
                inLine = in.readLine();
            }
        }
        in.close();
        myWriter.close();
        search.close();
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    System.out.println("Read and Write to File done in - " + (System.currentTimeMillis() - time));
}
I have a function that fetches a list, and for each item in that list I fetch another list. These two lists have to be transformed into a text file to be imported into another piece of software. The problem I'm having is that it is slow...
My first list has about 500 records, and every record in that list has another list that can range from 1 record to many. This takes about 20 minutes to complete. When I comment out the updates (which update the status of each item of the second list), the time drops to 9 minutes.
Can someone help me?
private String getExportacaoApontamento(
String idsExportar,
String dataInicial,
String dataFinal,
String status,
String tipoFiltro,
String filtro,
String turma,
boolean exportarTodos) throws SQLException {
StringBuilder stringBuilder = new StringBuilder();
try (
Connection conMySql = U_Conexao.getConexaoMySQL();
Connection conOracle = U_Conexao.getConexaoOracle()) {
if (conMySql != null) {
List<C_Sequencia> listSequencia;
R_Sequencia r_Sequencia = new R_Sequencia(conMySql, null);
if (status.equals(String.valueOf(C_Status.TODOS))) {
status = C_Status.PENDENTE + ", " + C_Status.EXPORTADO;
}
String orderBy = "S." + C_Sequencia.DATA + ", S." + C_Sequencia.COD_COLETOR + ", S." + C_Sequencia.COD_FISCAL;
if (exportarTodos == false) {
listSequencia = r_Sequencia.listExportarId(idsExportar, dataInicial, dataFinal, orderBy);
} else {
tipoFiltro = verificarFiltroApontamento(tipoFiltro);
if (filtro == null || filtro.isEmpty()) {
listSequencia = r_Sequencia.listExportarData(dataInicial, dataFinal, status, turma, orderBy);
} else {
listSequencia = r_Sequencia.listExportarFiltro(dataInicial, dataFinal, tipoFiltro, filtro, status, turma, orderBy);
}
}
if (!listSequencia.isEmpty()) {
if (!verificarLiberacoes(listSequencia, conMySql, conOracle)) {
return "-1";
}
C_Sequencia seqAntiga = null;
R_Producao r_Producao = new R_Producao(conMySql, conOracle);
for (C_Sequencia sequencia : listSequencia) {
C_Sequencia seqNova = sequencia;
String retornoSequencia = gerarSequencia(seqAntiga, seqNova);
if (!retornoSequencia.isEmpty()) {
stringBuilder.append(retornoSequencia);
}
List<C_Producao> listProducao = r_Producao.getProducaoExportar(sequencia.getChave(), status);
for (C_Producao producao : listProducao) {
DecimalFormat decimal = new DecimalFormat("########.00");
String prod = String.valueOf(decimal.format(Double.parseDouble(producao.getProducao())));
String meta = String.valueOf(decimal.format(Double.parseDouble(producao.getMeta())));
prod = prod.replace(",", "");
meta = meta.replace(",", "");
stringBuilder.append("02;")
.append(String.format("%5d", producao.getCodColetor())).append(";")
.append(U_DataHora.formatarData(producao.getDataHora(), U_DataHora.DDMMYYYY_HHMMSS, U_DataHora.DDMMYYYY)).append(";")
.append(String.format("%10d", producao.getFuncionario().getMatricula())).append(";")
.append(String.format("%9d", sequencia.getCodCc())).append(";")
.append(String.format("%4d", sequencia.getCodOp())).append(";")
.append(String.format("%4d", sequencia.getCodOp())).append(";")
.append(" ;")
.append(String.format("%6d", Long.parseLong(sequencia.getCodFazenda()))).append(";")
.append(String.format("%6d", Long.parseLong(sequencia.getCodTalhao()))).append(";")
.append(String.format("%10s", prod)).append(";")
.append(" 0000000;")
.append(";")
.append(" ;")
.append(String.format("%9s", meta)).append(";")
.append(String.format("%4d", sequencia.getCodSequencia())).append(";")
.append(" ;")
.append(" ;")
.append(" ;")
.append(" ;")
.append(" ;")
.append(" ;")
.append(String.format("%9d", sequencia.getCodOs())).append(";")
.append("\r\n");
if (producao.getStatus().getStatus() != C_Status.EXPORTADO) {
r_Producao.atualizarStatus(producao.getChave(), C_Status.EXPORTADO); // Atualiza status para exportado
}
}
// Atualiza o status de todas as produções da sequencia da posição atual
//r_Producao.atualizaStatusProducoesPorChaveSequencia(sequencia.getChave(), C_Status.EXPORTADO);
seqAntiga = seqNova;
if (sequencia.getStatus().getStatus() != C_Status.EXPORTADO) {
r_Sequencia.atualizarStatus(sequencia.getChave(), C_Status.EXPORTADO); // Atualiza status para exportado
}
}
}
}
} catch (Exception e) {
U_Log.erro(TAG, e.toString());
return e.toString();
}
return stringBuilder.toString();
}
private String gerarSequencia(C_Sequencia seqAntiga, C_Sequencia seqNova) {
StringBuilder sequenciaBuilder = new StringBuilder();
String texto = sequenciaBuilder.append("01;")
.append(String.format("%5d", seqNova.getCodColetor())).append(";")
.append(U_DataHora.formatarData(seqNova.getData(), U_DataHora.DDMMYYYY_HHMMSS, U_DataHora.DDMMYYYY)).append(";")
.append(String.format("%10d", seqNova.getCodFiscal())).append(";")
.append(String.format("%10d", seqNova.getCodFiscal())).append(";")
.append("\r\n").toString();
if (seqAntiga != null) {
if (!seqAntiga.getData().equals(seqNova.getData())
|| !Objects.equals(seqAntiga.getCodColetor(), seqNova.getCodColetor())
|| !Objects.equals(seqAntiga.getCodFiscal(), seqNova.getCodFiscal())) {
return texto;
} else {
return "";
}
} else {
return texto;
}
}
Results:
Here I commented out the updates in the DB:
for (C_Sequencia sequencia : listSequencia) {
...
for (C_Producao producao : listProducao) {
...
// update status in DB
if (producao.getStatus().getStatus() != C_Status.EXPORTADO) {
r_Producao.atualizarStatus(producao.getChave(), C_Status.EXPORTADO);
}
}
// update status in DB
if (sequencia.getStatus().getStatus() != C_Status.EXPORTADO) {
r_Sequencia.atualizarStatus(sequencia.getChave(), C_Status.EXPORTADO);
}
}
Result: total time 9 minutes.
Now I commented out the part where I build the string:
for (C_Sequencia sequencia : listSequencia) {
...
// here a part of the string is built in the method "gerarSequencia";
// I commented out a part inside "gerarSequencia"
String retornoSequencia = gerarSequencia(seqAntiga, seqNova);
...
for (C_Producao producao : listProducao) {
...
// Here i commented another part of the string
...
}
}
Result: total time is 14 minutes, practically running only the update lines.
I solved my problem by changing the logic. The actual problem was doing many SELECTs against the database, which made it extremely slow. Now it is a single SELECT, and the total time is 6 seconds.
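For anyone hitting the same N+1 query pattern, the general shape of the fix is to replace the per-sequence SELECT inside the loop with one query and group the rows in memory. A hedged sketch with plain JDBC; the table and column names are made up, since the real schema behind R_Producao is not shown:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class ProducaoBulkLoader {
    // One SELECT for all sequences instead of one SELECT per sequence;
    // rows are grouped by the sequence key so the export loop becomes a map lookup.
    static Map<Long, List<String>> loadPorSequencia(Connection con, int status) throws Exception {
        Map<Long, List<String>> porSequencia = new HashMap<>();
        String sql = "SELECT chave_sequencia, producao FROM producao WHERE status = ?"; // hypothetical schema
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, status);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long chave = rs.getLong("chave_sequencia");
                    porSequencia.computeIfAbsent(chave, k -> new ArrayList<>()).add(rs.getString("producao"));
                }
            }
        }
        // In the export loop: porSequencia.getOrDefault(sequencia.getChave(), new ArrayList<>())
        return porSequencia;
    }
}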
You have two problems:
1 - You are not actually writing to the file, but returning one big String. I don't think processing such large lists in memory is a good idea.
2 - How are you doing the updates? I guess the problem is that you are hitting the DB for every row. You should think about how to do them all at once; a batch-update sketch follows below.
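For the second point, here is a hedged sketch of a JDBC batch update; the table and column names are made up, since the real schema behind r_Producao is not shown:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

// Mark many rows as exported in one round trip instead of issuing one UPDATE per row.
private void marcarExportadosEmLote(Connection con, List<Long> chaves, int statusExportado) throws Exception {
    try (PreparedStatement ps = con.prepareStatement("UPDATE producao SET status = ? WHERE chave = ?")) {
        for (Long chave : chaves) {
            ps.setInt(1, statusExportado);
            ps.setLong(2, chave);
            ps.addBatch();
        }
        ps.executeBatch(); // single batch round trip, driver permitting
    }
}
For the first point, writing the records straight to a file could look like this: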
private File writeRecords(String filenaam, List<String> records) throws IOException
{
File file = new File(filenaam);
try (FileOutputStream fos = new FileOutputStream(file);
PrintWriter writer = new PrintWriter(fos))
{
records.forEach(rec -> writer.println(rec));
writer.flush();
}
return file;
}
And if you want to compress the file:
private void compressFiles(String zipfile, List<File> files) throws IOException
{
try (FileOutputStream fos = new FileOutputStream(zipfile);
ZipOutputStream zos = new ZipOutputStream(new BufferedOutputStream(fos)))
{
for (File file : files)
{
ZipEntry entry = new ZipEntry(file.getName());
byte[] bytes = Files.readAllBytes(Paths.get(file.getAbsolutePath()));
zos.putNextEntry(entry);
zos.write(bytes, 0, bytes.length);
zos.closeEntry();
}
zos.finish();
zos.flush();
}
}
This link may be helpful for your performance issue. This may not be a file-writing problem; as far as I can tell, the issue is String manipulation:
What's the fastest way to concatenate two Strings in Java?
I'm having trouble reading data more than once from the current input stream. The server/service it is connecting to uses libevent for event-driven reads and writes. However, the write event is never received after the initial packet has been received with the code snippet below:
new Thread(new Runnable() {
@Override
public void run() {
try {
do {
Log.d("Socket", "Entering a new read" + socketInputStream.available());
// each packet begins with a packetID, UInt32
int newPacketID = socketInputStream.readInt();
newPacketID = Integer.reverseBytes(newPacketID); // to little endian
int packetLength = socketInputStream.readInt();
packetLength = Integer.reverseBytes(packetLength);
byte[] payload = new byte[packetLength];
socketInputStream.readFully(payload);
Log.d("Socket", "Read: " + newPacketID);
Log.d("Socket", "Length: " + packetLength);
Log.d("Socket", "Payload: " + payload.toString());
payload = null;
//socketOutputStream.write(0);
//socketOutputStream.flush();
//socketInputStream = new DataInputStream(socket.getInputStream());
} while( isConnected == true );
Log.d("Socket", "Got away from the loop");
} catch(Exception exc) {
Log.d("Socket", "Reading exception: " + exc.getMessage());
}
}
}).start();
Uncommenting the single 0-byte write plus flush on the outputStream does mark the socket for writing again, but I'm wondering how I can achieve the same result without such a hacky method. Why does Java not allow this socket to be read from again?
Is it because the Thread is blocking the socketInputStream from being used anywhere else (thus allowing the socket to mark itself to be available for writing again)?
In more detail, the server has a sendBuffer that it tries to empty every time the socket is marked for writing. It writes everything it can, then waits for a new write event, checks whether data is available, and starts sending if that is the case. If there is no writable socket currently available, the server fills the sendBuffer until a new write event can empty it.
func checkForData() {
guard canWrite == true else {
socketWriteEvent.add() // Make sure we know when we can write again!
return
}
guard sendBuffer.count > 0 else { return }
canWrite = false
//let maxChunkSize = 20240 > sendBuffer.count ? sendBuffer.count : 20240
var bytesWritten = 0
var totalWritten = 0
repeat {
let chunk = Array(sendBuffer[totalWritten..<sendBuffer.count])
print("Write being executed")
bytesWritten = tls.write(chunk, count: chunk.count)
totalWritten += bytesWritten
print("Still in the write loop")
} while( bytesWritten > 0 && (totalWritten < sendBuffer.count) )
if totalWritten > 0 {
sendBuffer.removeFirst(totalWritten)
}
if bytesWritten < 0 {
let error = tls.context.contextError()
if error.isEmpty == false {
print("[TLS] Error: \(error)")
}
}
print("Write completed");
socketWriteEvent.add()
}
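For reference, the framing used in the Java read loop above (little-endian uint32 packet id, little-endian uint32 length, then the payload) can also be decoded with a ByteBuffer instead of Integer.reverseBytes; a sketch assuming the same blocking DataInputStream:
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Reads one framed packet: little-endian uint32 id, little-endian uint32 length, then the payload.
static byte[] readPacket(DataInputStream in) throws IOException {
    byte[] header = new byte[8];
    in.readFully(header);
    ByteBuffer buf = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
    int packetId = buf.getInt();      // real code would return or dispatch on this as well
    int packetLength = buf.getInt();
    byte[] payload = new byte[packetLength];
    in.readFully(payload);
    return payload;
}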
From this topic, there are two ways to trigger a Solr optimize from Java code: either sending an HTTP request, or using the SolrJ API.
But how do I check its progress? Say, an API which returns the progress of the optimize as a percentage, or strings like RUNNING/COMPLETED/FAILED. Is there such an API?
Yes, optimize() in the SolrJ API is a synchronous method, so it only returns once the optimize has finished. Here is what I used to monitor the optimization:
CloudSolrClient client = null;
try {
client = new CloudSolrClient(zkClientUrl);
client.setDefaultCollection(collectionName);
m_logger.info("Explicit optimize of collection " + collectionName);
long optimizeStart = System.currentTimeMillis();
UpdateResponse optimizeResponse = client.optimize();
for (Object object : optimizeResponse.getResponse()) {
m_logger.info("Solr optimizeResponse" + object.toString());
}
if (optimizeResponse != null) {
m_logger.info(String.format(
" Elapsed Time(in ms) - %d, QTime (in ms) - %d",
optimizeResponse.getElapsedTime(),
optimizeResponse.getQTime()));
}
m_logger.info(String.format(
"Time spent on Optimizing a collection %s :"
+ (System.currentTimeMillis() - optimizeStart)
/ 1000 + " seconds", collectionName));
} catch (Exception e) {
m_logger.error("Failed during explicit optimize on collection "
+ collectionName, e);
} finally {
if (client != null) {
try {
client.close();
} catch (IOException e) {
throw new RuntimeException(
"Failed to close CloudSolrClient connection.", e);
}
client = null;
}
}
There are a lot of ConcurrentModificationException questions, but I'm unable to find an answer that has helped me resolve my issue. If you find an answer that does, please supply a link instead of just downvoting.
So I originally got a ConcurrentModificationException when attempting to search through an ArrayList and remove elements. For a while I had it resolved by creating a second ArrayList, adding the discovered elements to it, and then using removeAll() outside the for loop. This seemed to work, but as I used the for loop to import data from multiple files I started getting ConcurrentModificationExceptions again, intermittently for some reason. Any help would be greatly appreciated.
Here's the specific method having the problem (as well as the other methods it calls...):
public static void removeData(ServiceRequest r) {
readData();
ArrayList<ServiceRequest> targets = new ArrayList<ServiceRequest>();
for (ServiceRequest s : serviceQueue) {
//ConcurrentModification Exception triggered on previous line
if (
s.getClient().getSms() == r.getClient().getSms() &&
s.getTech().getName().equals(r.getTech().getName()) &&
s.getDate().equals(r.getDate())) {
JOptionPane.showMessageDialog(null, s.getClient().getSms() + "'s Service Request with " + s.getTech().getName() + " on " + s.getDate().toString() + " has been removed!");
targets.add(s);
System.out.print("targetted"); }
}
if (targets.isEmpty()) { System.out.print("*"); }
else {
System.out.print("removed");
serviceQueue.removeAll(targets);
writeData(); }
}
public static void addData(ServiceRequest r) {
readData();
removeData(r);
if (r.getClient().getStatus().equals("MEMBER") || r.getClient().getStatus().equals("ALISTER")) {
serviceQueue.add(r); }
else if (r.getClient().getStatus().equals("BANNED") || r.getClient().getStatus().equals("UNKNOWN")) {
JOptionPane.showMessageDialog(null, "New Request failed: " + r.getClient().getSms() + " is " + r.getClient().getStatus() + "!", "ERROR: " + r.getClient().getSms(), JOptionPane.WARNING_MESSAGE);
}
else {
int response = JOptionPane.showConfirmDialog(null, r.getClient().getSms() + " is " + r.getClient().getStatus() + "...", "Manually Overide?", JOptionPane.OK_CANCEL_OPTION);
if (response == JOptionPane.OK_OPTION) {
serviceQueue.add(r); }
}
writeData(); }
public static void readData() {
try {
Boolean complete = false;
FileReader reader = new FileReader(f);
ObjectInputStream in = xstream.createObjectInputStream(reader);
serviceQueue.clear();
while(complete != true) {
ServiceRequest test = (ServiceRequest)in.readObject();
if(test != null && test.getDate().isAfter(LocalDate.now().minusDays(180))) {
serviceQueue.add(test); }
else { complete = true; }
}
in.close(); }
catch (IOException | ClassNotFoundException e) { e.printStackTrace(); }
}
public static void writeData() {
if(serviceQueue.isEmpty()) { serviceQueue.add(new ServiceRequest()); }
try {
FileWriter writer = new FileWriter(f);
ObjectOutputStream out = xstream.createObjectOutputStream(writer);
for(ServiceRequest r : serviceQueue) { out.writeObject(r); }
out.writeObject(null);
out.close(); }
catch (IOException e) { e.printStackTrace(); }
}
EDIT
The changes cause the ConcurrentModificationException to trigger every time rather than intermittently, which I guess means the removal code is better, but the error now triggers at it.remove();
public static void removeData(ServiceRequest r) {
readData();
for(Iterator<ServiceRequest> it = serviceQueue.iterator(); it.hasNext();) {
ServiceRequest s = it.next();
if (
s.getClient().getSms() == r.getClient().getSms() &&
s.getTech().getName().equals(r.getTech().getName()) &&
s.getDate().equals(r.getDate())) {
JOptionPane.showMessageDialog(null, s.getClient().getSms() + "'s Service Request with " + s.getTech().getName() + " on " + s.getDate().toString() + " has been removed!");
it.remove(); //Triggers here (line 195)
System.out.print("targetted"); }
}
writeData(); }
Exception in thread "AWT-EventQueue-0" java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
at java.util.ArrayList$Itr.next(ArrayList.java:851)
at data.ServiceRequest.removeData(ServiceRequest.java:195)
at data.ServiceRequest.addData(ServiceRequest.java:209) <...>
EDIT
After some more searching, I've switched the for loop to:
Iterator<ServiceRequest> it = serviceQueue.iterator();
while(it.hasNext()) {
and it's back to triggering intermittently. By that I mean that the first time I attempt to import data (removeData is triggered from the addData method) it throws the ConcurrentModificationException, but on the next try it pushes past the failure and moves on to another file. I know there are a lot of these concurrent mod questions, but I'm not finding anything that helps in my situation, so links to other answers are more than welcome...
This is not how to do it; to remove elements while iterating over a List, you use an Iterator, like this:
for (Iterator<ServiceRequest> it = serviceQueue.iterator(); it.hasNext();) {
    ServiceRequest currentServReq = it.next();
    if (someCondition) {
        it.remove();
    }
}
And you will not get a ConcurrentModificationException this way, as long as only one thread touches the list.
If there are multiple threads involved in your code, you may still get a ConcurrentModificationException. Wrapping the collection with Collections.synchronizedList(...) (for serviceQueue) makes individual operations thread-safe, but note that you still have to synchronize on the wrapper manually while iterating, otherwise iteration can throw ConcurrentModificationException anyway. Alternatively, a concurrent collection such as CopyOnWriteArrayList never throws it, at the cost of copying the array on every write, so your code may become slower.
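A hedged sketch of the CopyOnWriteArrayList option, using the ServiceRequest type and writeData() helper from the question (reasonable when the queue is read far more often than it is modified):
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// CopyOnWriteArrayList iterators work on an immutable snapshot, so they never throw
// ConcurrentModificationException; removals go through the list itself.
static final List<ServiceRequest> serviceQueue = new CopyOnWriteArrayList<>();

static void removeMatching(ServiceRequest r) {
    for (ServiceRequest s : serviceQueue) {
        if (s.getDate().equals(r.getDate())) {   // simplified match condition from the question
            serviceQueue.remove(s);              // safe: the snapshot iterator is unaffected
        }
    }
    writeData();
}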