I have a huge CSV file with over 700K lines. I have to parse the lines of that CSV file and do operations on them. I thought of doing it with threading. My first attempt is simple: every thread should process unique lines of the CSV file. I have limited the number of lines to read to 3000. I create three threads, and each thread should read lines of the CSV file. Following is the code:
import java.io.*;

class CSVOps implements Runnable
{
    static int lineCount = 1;
    static int limit = 3000;
    BufferedReader CSVBufferedReader;

    public CSVOps(){} // Default constructor

    public CSVOps(BufferedReader br){
        this.CSVBufferedReader = br;
    }

    private synchronized void readCSV(){
        System.out.println("Current thread "+Thread.currentThread().getName());
        String line;
        try {
            while((line = CSVBufferedReader.readLine()) != null){
                System.out.println(line);
                lineCount ++;
                if(lineCount >= limit){
                    break;
                }
            }
        }
        catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void run() {
        readCSV();
    }
}
class CSVResourceHandler
{
    String CSVPath;

    public CSVResourceHandler(){ } // default constructor

    public CSVResourceHandler(String path){
        File f = new File(path);
        if(f.exists()){
            CSVPath = path;
        }
        else{
            System.out.println("Wrong file path! You gave: "+path);
        }
    }

    public BufferedReader getCSVFileHandler(){
        BufferedReader br = null;
        try{
            FileReader is = new FileReader(CSVPath);
            br = new BufferedReader(is);
        }
        catch(Exception e){
        }
        return br;
    }
}
public class invalidRefererCheck
{
    public static void main(String [] args) throws InterruptedException
    {
        String pathToCSV = "/home/shantanu/DEV_DOCS/Contextual_Work/invalid_domain_kw_site_wise_click_rev2.csv";
        CSVResourceHandler csvResHandler = new CSVResourceHandler(pathToCSV);
        CSVOps ops = new CSVOps(csvResHandler.getCSVFileHandler());

        Thread t1 = new Thread(ops);
        t1.setName("T1");
        Thread t2 = new Thread(ops);
        t1.setName("T2");
        Thread t3 = new Thread(ops);
        t1.setName("T3");

        t1.start();
        t2.start();
        t3.start();
    }
}
The CSVResourceHandler class simply checks whether the passed file exists and, if so, creates and returns a BufferedReader. This reader is passed to the CSVOps class, which has a method, readCSV, that reads a line of the CSV file at a time and prints it. There is a limit set to 3000.
Now, so that the threads don't mess up the count, I declare both the limit and the count variables as static. When I run this program I get weird output: I get only about 1000 records, and sometimes 1500, and they are in random order. At the end of the output I get two lines of the CSV file, and the current thread name comes out to be main!!
I am very much a novice with threads. I want reading this CSV file to become fast. What can be done?
OK, first: do not use multiple threads to do parallel I/O from a single mechanical disk. It actually slows performance down, because the mechanical head needs to seek to the next reading location every time a different thread gets a chance to run. You are thus unnecessarily bouncing the disk's head around, which is a costly operation.
Use a single producer multiple consumer model to read lines using a single thread and process them using a pool of workers.
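For illustration, here is a minimal sketch of that model, assuming a hypothetical per-line process() method; the class and variable names are made up. For clarity it queues individual lines, though batching lines into chunks (as another answer below suggests) reduces queue overhead:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ProducerConsumerCsv {

    private static final String POISON = new String("EOF");   // sentinel object telling a worker to stop

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // consumers: take lines off the queue and process them
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    String line;
                    while ((line = queue.take()) != POISON) {
                        process(line);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // single producer: the only thread that touches the disk
        try (BufferedReader br = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = br.readLine()) != null) {
                queue.put(line);
            }
        }
        for (int i = 0; i < workers; i++) {
            queue.put(POISON);                                 // one sentinel per worker
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // hypothetical per-line work; in the question this would be the CSV parsing
    private static void process(String line) {
    }
}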
On to your problem:
Shouldn't you actually be waiting for the threads to finish before exiting main?
public class invalidRefererCheck
{
    public static void main(String [] args) throws InterruptedException
    {
        ...
        t1.start();
        t2.start();
        t3.start();

        t1.join();
        t2.join();
        t3.join();
    }
}
I suggest reading the file in big chunks. Allocate a big buffer object, read a chunk, then parse back from the end to find the last EOL char. Copy the last bit of the buffer (the partial line) into a temp string, shove a null into the buffer at EOL+1, queue off the buffer reference, and immediately create a new buffer: copy in the temp string first, then fill up the rest of the buffer, and repeat until EOF. Use a pool of threads to parse/process the buffers.
You have to queue up whole chunks of valid lines. Queueing off single lines will result in the thread comms taking longer than the parsing.
Note that this, and similar, will probably result in the chunks being processed 'out-of-order' by the threads in the pool. If order must be preserved, (for example, the input file is sorted and the output is going into another file which must remain sorted), you can have the chunk-assembler thread insert a sequence-number in each chunk object. The pool threads can then pass processed buffers to yet another thread, (or task), that keeps a list of out-of-order chunks until all previous chunks have come in.
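As a rough Java rendering of this scheme, simplified to batch whole lines per chunk instead of raw byte buffers: a chunk object carries a sequence number, a single reader thread assembles chunks, and a worker pool consumes them. All names, the chunk size, and the timeout are assumptions:

import java.io.*;
import java.util.*;
import java.util.concurrent.*;

// One unit of work: a batch of whole lines plus its position in the file,
// so a downstream consumer can restore the original order if required.
class Chunk {
    final long seq;
    final List<String> lines;
    Chunk(long seq, List<String> lines) { this.seq = seq; this.lines = lines; }
}

public class ChunkAssembler {

    static final int LINES_PER_CHUNK = 10_000;                          // tune to taste
    static final Chunk EOF = new Chunk(-1, Collections.emptyList());    // sentinel telling workers to stop

    public static void main(String[] args) throws Exception {
        BlockingQueue<Chunk> queue = new ArrayBlockingQueue<>(16);      // small queue: back-pressure on the reader
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    for (Chunk c; (c = queue.take()) != EOF; ) {
                        for (String line : c.lines) {
                            // parse/process one line; c.seq records where the chunk came from
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // single reader thread: the only code that touches the disk
        long seq = 0;
        try (BufferedReader br = new BufferedReader(new FileReader(args[0]), 1 << 16)) {
            List<String> batch = new ArrayList<>(LINES_PER_CHUNK);
            for (String line; (line = br.readLine()) != null; ) {
                batch.add(line);
                if (batch.size() == LINES_PER_CHUNK) {
                    queue.put(new Chunk(seq++, batch));
                    batch = new ArrayList<>(LINES_PER_CHUNK);
                }
            }
            if (!batch.isEmpty()) queue.put(new Chunk(seq++, batch));
        }
        for (int i = 0; i < workers; i++) queue.put(EOF);               // one sentinel per worker
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}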
Multithreading does not have to be difficult/dangerous/ineffective. If you use queues/pools/tasks, avoid synchronize/join, don't continually create/terminate/destroy threads and only queue around large buffer objects that only one thread ever gets to work on at a time. You should see a good speedup with next-to-no possibility of deadlocks, false-sharing, etc.
The next step in such a speedup would be to pre-allocate a pool queue of buffers to eliminate continual creation/deletion of the buffers and associated GC and with a (L1 cache size) 'dead-zone' at the start of every buffer to eliminate cache-sharing completely.
That would go plenty quick on a multicore box, (esp. with an SSD!).
Oh, Java, right. I apologise for the 'CplusPlus-iness' of my answer with the null terminator. The rest of the points are OK, though. This should be a language-agnostic answer:)
Related
I am implementing a class that receives a large text file. I want to split it into chunks, and have each chunk held by a different thread that counts the frequency of each character in that chunk. I expected that starting more threads would give better performance, but it turns out performance is getting poorer. Here's my code:
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class Main {

    public static void main(String[] args)
            throws IOException, InterruptedException, ExecutionException, ParseException
    {
        // save the current run's start time
        long startTime = System.currentTimeMillis();

        // create options
        Options options = new Options();
        options.addOption("t", true, "number of threads to start");

        // variables to hold options
        int numberOfThreads = 1;

        // parse options
        CommandLineParser parser = new DefaultParser();
        CommandLine cmd;
        cmd = parser.parse(options, args);
        String threadsNumber = cmd.getOptionValue("t");
        numberOfThreads = Integer.parseInt(threadsNumber);

        // read file
        RandomAccessFile raf = new RandomAccessFile(args[0], "r");
        MappedByteBuffer mbb
            = raf.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, raf.length());

        ExecutorService pool = Executors.newFixedThreadPool(numberOfThreads);
        Set<Future<int[]>> set = new HashSet<Future<int[]>>();

        long chunkSize = raf.length() / numberOfThreads;
        byte[] buffer = new byte[(int) chunkSize];

        while(mbb.hasRemaining())
        {
            int remaining = buffer.length;
            if(mbb.remaining() < remaining)
            {
                remaining = mbb.remaining();
            }
            mbb.get(buffer, 0, remaining);
            String content = new String(buffer, "ISO-8859-1");

            @SuppressWarnings("unchecked")
            Callable<int[]> callable = new FrequenciesCounter(content);
            Future<int[]> future = pool.submit(callable);
            set.add(future);
        }

        raf.close();

        // let's assume we will use extended ASCII characters only
        int alphabet = 256;

        // hold how many times each character is contained in the input file
        int[] frequencies = new int[alphabet];

        // sum the frequencies from each thread
        for(Future<int[]> future: set)
        {
            for(int i = 0; i < alphabet; i++)
            {
                frequencies[i] += future.get()[i];
            }
        }
    }
}
// helper class for multithreaded frequency counting
class FrequenciesCounter implements Callable
{
    private int[] frequencies = new int[256];
    private char[] content;

    public FrequenciesCounter(String input)
    {
        content = input.toCharArray();
    }

    public int[] call()
    {
        System.out.println("Thread " + Thread.currentThread().getName() + " start");
        for(int i = 0; i < content.length; i++)
        {
            frequencies[(int)content[i]]++;
        }
        System.out.println("Thread " + Thread.currentThread().getName() + " finished");
        return frequencies;
    }
}
As suggested in the comments, you will usually not get better performance by reading from multiple threads. Rather, you should process the chunks you have read on multiple threads. Usually the processing involves some blocking I/O operations (saving to another file? saving to a database? an HTTP call?), and your performance will get better if you process on multiple threads.
For the processing you can use an ExecutorService (with a sensible number of threads); use java.util.concurrent.Executors to obtain an instance of java.util.concurrent.ExecutorService.
Having an ExecutorService instance, you can submit your chunks for processing. Submitting a chunk does not block; the ExecutorService will start to process each chunk on a separate thread (the details depend on the configuration of the ExecutorService). You may submit instances of Runnable or Callable.
Finally, after you have submitted all items, call shutdown() so the executor stops accepting new tasks, then awaitTermination to wait until the processing of all submitted items has finished. If awaitTermination times out, call shutdownNow() to abort whatever is still running (otherwise a rogue task could keep it hanging indefinitely).
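A minimal sketch of that lifecycle, assuming a hypothetical processChunk method and a hard-coded list of chunks standing in for whatever the reader thread produces:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ChunkProcessingSketch {

    // hypothetical stand-in for the real per-chunk work (blocking I/O, parsing, ...)
    static void processChunk(String chunk) { }

    public static void main(String[] args) throws InterruptedException {
        List<String> chunks = Arrays.asList("chunk-1", "chunk-2", "chunk-3");
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        for (String chunk : chunks) {
            pool.submit(() -> processChunk(chunk));   // submit() returns immediately
        }

        pool.shutdown();                              // stop accepting new tasks
        if (!pool.awaitTermination(1, TimeUnit.HOURS)) {
            pool.shutdownNow();                       // abort anything still running after the timeout
        }
    }
}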
Your program is almost certainly limited by the speed of reading from disk. Using multiple threads does not help with this since the limit is a hardware limit on how fast the information can be transferred from disk.
In addition, the use of both RandomAccessFile and a subsequent buffer likely results in a small slowdown, since you are moving the data in memory after reading it in but before processing, rather than just processing it in place. You would be better off not using an intermediate buffer.
You might get a slight speedup by reading from the file directly into the final buffers and dispatching those buffers to be processed by threads as they are filled, rather than waiting for the entire file to be read before processing. However, most of the time would still be used by the disk read, so any speedup would likely be minimal.
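As a sketch of that last suggestion, here is one way to read fixed-size buffers and hand each one to the pool as soon as it is filled; the 1 MB buffer size and the countFrequencies method are assumptions, not part of the original code:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BufferDispatchSketch {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try (InputStream in = new FileInputStream(args[0])) {
            while (true) {
                byte[] buffer = new byte[1 << 20];                    // a fresh 1 MB buffer per task
                int read = in.read(buffer);
                if (read == -1) break;
                final int length = read;
                pool.submit(() -> countFrequencies(buffer, length));  // dispatch as soon as it is filled
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // counts byte frequencies in one buffer; merging the per-buffer results is left out
    static void countFrequencies(byte[] buffer, int length) {
        int[] freq = new int[256];
        for (int i = 0; i < length; i++) {
            freq[buffer[i] & 0xFF]++;
        }
    }
}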
I have read a few answers about reading a file using multithreading and found that its efficiency is very poor, but still, for the sake of learning, I am trying to read a file using multithreading, i.e. for a large file some records should be read by one thread and the others by another.
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Scanner;

public class QueueThread implements Runnable {

    int count=0;
    private int start;
    private int end;

    public QueueThread(int start,int end) {
        this.start=start;
        this.end=end;
    }

    public void run() {
        for(int i=start;i<end;i++) {
            try {
                Scanner read = new Scanner (new File("userinfo.txt"));
                read.useDelimiter(",|\n");
                String mobile,recharge;
                while(read.hasNext())
                {
                    mobile = read.next();
                    recharge =read.next();
                    ArrayList<String> words = new ArrayList<String>();
                    words.add(mobile+recharge);
                    count++;
                    System.out.println("mobile no.:"+ mobile);
                    System.out.println("recharge amount:"+ recharge);
                    System.out.println("count:"+ count );
                }
                read.close();
            } catch (FileNotFoundException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}
Control.java:
public class Control {
    public static void main(String args[]) throws InterruptedException
    {
        QueueThread r1=new QueueThread(0,15);
        QueueThread r2=new QueueThread(15,30);

        Thread t1 =new Thread(r1);
        Thread t2 =new Thread(r2);

        t1.start();
        t2.start();

        t1.join();
        t2.join();
    }
}
Here I am reading the file userinfo.txt, which contains some random 10-digit numbers and an amount. Each thread reads the whole file, rather than one thread reading just the first 15 entries and the other thread the remaining entries, which I believe defeats my goal of reading the file in parallel.
I am also trying to store the extracted data in an ArrayList for performing further operations on it.
userinfo.txt
9844794101,43
9844749102,54
9844741903,55
9844741094,33
9844741095,87
9844741068,32
9844974107,53
8848897101,343
8848891702,345
8848891063,34
8848849104,64
I really need some way to read the file simultaneously in different threads.
current output
mobile no.:9844794101
recharge amount:43
mobile no.:9844794101
count:1
recharge amount:43
count:1
mobile no.:9844749102
recharge amount:54
mobile no.:9844749102
recharge amount:54
count:2
count:2
And so on
If it's for the sake of learning, then just share a single Scanner object between your two threads. Since you need to read a pair of adjacent tokens and then join them, you'll have to think of some way to make your two threads cooperate.
The simplest way is to let every thread read a couple of tokens inside a synchronized(scannerObject) {...} block. Of course, the performance would be worse than a single-threaded solution. Other solutions can avoid that synchronization, e.g. using an AtomicInteger as a counter and storing the words in a ConcurrentSkipListMap keyed by the counter.
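A minimal sketch of the synchronized-Scanner variant, reusing the userinfo.txt format from the question; the class name and thread count are illustrative:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class SharedScannerReader implements Runnable {

    private final Scanner scanner;   // one instance shared by all threads

    public SharedScannerReader(Scanner scanner) {
        this.scanner = scanner;
    }

    @Override
    public void run() {
        while (true) {
            String mobile, recharge;
            synchronized (scanner) {                 // only one thread pulls a record at a time
                if (!scanner.hasNext()) return;
                mobile = scanner.next();
                recharge = scanner.hasNext() ? scanner.next() : "";
            }
            // do the real work outside the lock
            System.out.println(Thread.currentThread().getName() + ": " + mobile + "," + recharge);
        }
    }

    public static void main(String[] args) throws FileNotFoundException, InterruptedException {
        Scanner read = new Scanner(new File("userinfo.txt"));
        read.useDelimiter(",|\n");
        Thread t1 = new Thread(new SharedScannerReader(read));
        Thread t2 = new Thread(new SharedScannerReader(read));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        read.close();
    }
}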
I think the classic approach is to know a precise point in the middle of the file from which you could start reading a new record. Then your first thread would read the file from the start to the 'middle' position, and the second thread from the 'middle' position to the end. See e.g. Make BufferedReader start from the middle of a .txt file rather than the beginning?
Instead of
Scanner read = new Scanner (new File("userinfo.txt"));
you need to use something like
InputStream inputStream = new BufferedInputStream(new FileInputStream(new File("userinfo.txt")));
inputStream.skip(<number of bytes to start of first desired record>);
Scanner read = new Scanner(inputStream);
// then make sure you only read as many records as you need
Search for more information about InputStreams and Readers.
The problem is that, given your record format, there is no way to compute the correct argument to skip without reading the previous part of the file (though you only need to look for newlines, not for , or |). You could make start and end a number of bytes instead of a number of records, but then you need to be aware that you are likely to land in the middle of a record, and handle that carefully.
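For illustration, here is one way the byte-range idea could look, using RandomAccessFile.seek instead of InputStream.skip (the effect is the same here). Each thread realigns itself to the next newline and stops once it has read past the end of its range; the class and field names are made up, and single-byte line terminators are assumed:

import java.io.IOException;
import java.io.RandomAccessFile;

public class RangeReader implements Runnable {

    private final String path;
    private final long start, end;

    public RangeReader(String path, long start, long end) {
        this.path = path;
        this.start = start;
        this.end = end;
    }

    @Override
    public void run() {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            raf.seek(start);
            long pos = start;
            if (start != 0) {                            // we probably landed mid-record:
                String partial = raf.readLine();         // discard up to the next newline
                pos += partial == null ? 0 : partial.length() + 1;
            }
            String line;
            while (pos <= end && (line = raf.readLine()) != null) {
                pos += line.length() + 1;                // +1 for the newline (ASCII data assumed)
                String[] parts = line.split(",");
                if (parts.length < 2) continue;
                System.out.println("mobile no.:" + parts[0] + " recharge amount:" + parts[1]);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

One thread would then run new RangeReader("userinfo.txt", 0, mid) and the other new RangeReader("userinfo.txt", mid, fileLength), with mid being roughly half the file size in bytes.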
Also, if you want the final ArrayList to be in order, then the second thread will have to wait until the first thread is done inserting. If you don't, make sure to synchronize access to it or use https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentLinkedQueue.html instead.
I have to analyze different log files which include retrieving time-stamp, URL, etc.
I am using multithreading for this. Each thread is accessing different log file and doing the task.
Program for doing it :
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.ArrayList;

public class checkMultithreadedThroughput{

    public static void main(String args[]){
        ArrayList<String> fileNames = new ArrayList<>();
        fileNames.add("log1");
        fileNames.add("log2");
        fileNames.add("log3");
        fileNames.add("log4");
        fileNames.add("log5");
        fileNames.add("log6");
        fileNames.add("log7");
        fileNames.add("log8");
        fileNames.add("log9");

        Thread[] threads = new Thread[fileNames.size()];
        try{
            for(int i=0; i<fileNames.size(); i++){
                threads[i] = new MultithreadedThroughput(fileNames.get(i));
                threads[i].start();
            }
        }catch(Exception e){
            e.printStackTrace();
        }
    }
}
class MultithreadedThroughput extends Thread{

    String filename = null;

    MultithreadedThroughput(String filename){
        this.filename = filename;
    }

    public void run(){
        calculateThroughput();
    }

    public void calculateThroughput(){
        String line = null;
        BufferedReader br = null;
        try{
            br = new BufferedReader(new FileReader(new File(filename)));
            while((line = br.readLine())!=null){
                //do the analysis on line
            }
        }catch(Exception e){
            e.printStackTrace();
        }
    }
}
Now, in the MultithreadedThroughput class, which extends Thread, I am reading the file using a BufferedReader. The whole process takes around 15 minutes (each file is big, around 2 GB).
I want to optimize the program so that it takes less time.
The solution I thought of: instead of starting threads on all the log files, take one large log file at a time, split it into chunks (as many chunks as there are processors) and then start threads on those chunks; OR keep the same program as before but, instead of reading one line at a time, read multiple lines at a time and do the analysis. But I don't know how to do either of them.
Please explain the solution.
In the calculateThroughput method I have to estimate the throughput of a URL per one-hour interval. So if I break the files up by the number of processors, a split may fall in the middle of an interval, i.e.:
Suppose an interval starts at 06:00:00 and runs until 07:00:00 (one interval); there will be 24 such intervals (one day) in each log file. If I break a large log file, it may break in the middle of an interval, and then I don't know how to compute that interval. That's the problem I am facing with splitting the file.
I would not try and split a single file for multiple threads. This will create overhead and can't be better than doing several files in parallel.
Create the BufferedReader with a substantial buffer size, e.g. 64 KB or bigger. The optimum is system-dependent - you'll have to experiment. Later (in response to a comment from the OP): the buffer size does not affect the application logic - data is still read line by line, and the step from one hour into the next must be handled anyway by carrying the line over into the next batch.
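Concretely, that is just the two-argument BufferedReader constructor; the 64 KB figure below is only a starting point to benchmark against, and filename stands for one of the log files:

// default buffer is 8 KB; a larger one means fewer, bigger reads from the OS
BufferedReader br = new BufferedReader(new FileReader(filename), 64 * 1024);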
There is no point in reading several lines at a time - readLine just fetches a line from the buffer.
Very likely you are losing time in the analysis.
I don't think you can do the job faster because more threads do not help if your processor doesn't have enough cores.
I need advice from someone who knows Java very well and knows about memory issues.
I have a large file (something like 1.5 GB) and I need to cut this file into many smaller files (100 small files, for example).
I know generally how to do it (using a BufferedReader), but I would like to know if you have any advice regarding memory, or tips on how to do it faster.
My file contains text; it is not binary, and I have about 20 characters per line.
To save memory, do not unnecessarily store/duplicate the data in memory (i.e. do not assign it to variables outside the loop). Just process the output immediately as the input comes in.
It really doesn't matter whether you're using BufferedReader or not. It will not cost significantly more memory, as some implicitly seem to suggest; at worst it will cost a few % of performance. The same applies to using NIO: it will only improve scalability, not memory use. It only becomes interesting when you have hundreds of threads running on the same file.
Just loop through the file, write every line immediately to the other file as you read it in, count the lines, and when the count reaches 100 switch to the next file, et cetera.
Kickoff example:
String encoding = "UTF-8";
int maxlines = 100;
BufferedReader reader = null;
BufferedWriter writer = null;

try {
    reader = new BufferedReader(new InputStreamReader(new FileInputStream("/bigfile.txt"), encoding));
    int count = 0;
    for (String line; (line = reader.readLine()) != null;) {
        if (count++ % maxlines == 0) {
            close(writer);
            writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("/smallfile" + (count / maxlines) + ".txt"), encoding));
        }
        writer.write(line);
        writer.newLine();
    }
} finally {
    close(writer);
    close(reader);
}

// the close() used above is a small null-safe helper along these lines:
static void close(Closeable resource) {
    if (resource != null) {
        try {
            resource.close();
        } catch (IOException ignore) {
            // nothing sensible to do here
        }
    }
}
First, if your file contains binary data, then using BufferedReader would be a big mistake (because you would be converting the data to String, which is unnecessary and could easily corrupt the data); you should use a BufferedInputStream instead. If it's text data and you need to split it along linebreaks, then using BufferedReader is OK (assuming the file contains lines of a sensible length).
Regarding memory, there shouldn't be any problem if you use a decently sized buffer (I'd use at least 1MB to make sure the HD is doing mostly sequential reading and writing).
If speed turns out to be a problem, you could have a look at the java.nio packages - those are supposedly faster than java.io.
You can consider using memory-mapped files, via FileChannel.
Generally a lot faster for large files. There are performance trade-offs that could make it slower, so YMMV.
Related answer: Java NIO FileChannel versus FileOutputstream performance / usefulness
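A minimal sketch of what that looks like; the path is a placeholder, and note that a single map() call is limited to 2 GB, so larger files have to be mapped in regions:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedReadSketch {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("/bigfile.txt"), StandardOpenOption.READ)) {
            // the whole file is mapped into virtual memory; the OS pages it in as needed
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            while (map.hasRemaining()) {
                byte b = map.get();   // process bytes (or decode them to characters) here
            }
        }
    }
}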
This is a very good article:
http://java.sun.com/developer/technicalArticles/Programming/PerfTuning/
In summary, for great performance, you should:
Avoid accessing the disk.
Avoid accessing the underlying operating system.
Avoid method calls.
Avoid processing bytes and characters individually.
For example, to reduce the access to disk, you can use a large buffer. The article describes various approaches.
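As one hedged illustration of the large-buffer point, a plain copy loop that moves the data in 1 MB blocks touches the disk and the operating system far less often than reading byte by byte; the paths here are placeholders:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BigBufferCopy {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[1024 * 1024];                    // 1 MB transfer buffer
        try (InputStream in = new FileInputStream("/bigfile.txt");
             OutputStream out = new FileOutputStream("/copy.txt")) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);                       // one call per megabyte, not per byte
            }
        }
    }
}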
Does it have to be done in Java? I.e. does it need to be platform-independent? If not, I'd suggest using the 'split' command in *nix. If you really wanted to, you could execute this command from your Java program. While I haven't tested it, I imagine it would perform faster than whatever Java IO implementation you could come up with.
You can use java.nio which is faster than classical Input/Output stream:
http://java.sun.com/javase/6/docs/technotes/guides/io/index.html
Yes.
I also think that using read() with arguments, like read(char[] cbuf, int off, int len), is a better way to read such a large file
(e.g. read(buffer, 0, buffer.length)).
I also experienced the problem of missing values when using a BufferedReader instead of a BufferedInputStream for a binary data input stream, so using a BufferedInputStream is much better in a case like this.
package all.is.well;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import junit.framework.TestCase;
/**
* @author Naresh Bhabat
*
Following implementation helps to deal with extra large files in java.
This program is tested for dealing with 2GB input file.
There are some points where extra logic can be added in future.
Please note: if we want to deal with a binary input file, then instead of reading lines we need to read bytes from the file object.
It uses random access file,which is almost like streaming API.
* ****************************************
Notes regarding executor framework and its readings.
Please note :ExecutorService executor = Executors.newFixedThreadPool(10);
* for 10 threads:Total time required for reading and writing the text in
* :seconds 349.317
*
* For 100:Total time required for reading the text and writing : seconds 464.042
*
* For 1000 : Total time required for reading and writing text :466.538
* For 10000 Total time required for reading and writing in seconds 479.701
*
*
*/
public class DealWithHugeRecordsinFile extends TestCase {
static final String FILEPATH = "C:\\springbatch\\bigfile1.txt.txt";
static final String FILEPATH_WRITE = "C:\\springbatch\\writinghere.txt";
static volatile RandomAccessFile fileToWrite;
static volatile RandomAccessFile file;
static volatile String fileContentsIter;
static volatile int position = 0;
public static void main(String[] args) throws IOException, InterruptedException {
long currentTimeMillis = System.currentTimeMillis();
try {
fileToWrite = new RandomAccessFile(FILEPATH_WRITE, "rw");//for random write,independent of thread obstacles
file = new RandomAccessFile(FILEPATH, "r");//for random read,independent of thread obstacles
seriouslyReadProcessAndWriteAsynch();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
Thread currentThread = Thread.currentThread();
System.out.println(currentThread.getName());
long currentTimeMillis2 = System.currentTimeMillis();
double time_seconds = (currentTimeMillis2 - currentTimeMillis) / 1000.0;
System.out.println("Total time required for reading the text in seconds " + time_seconds);
}
/**
* @throws IOException
* Something asynchronously serious
*/
public static void seriouslyReadProcessAndWriteAsynch() throws IOException {
ExecutorService executor = Executors.newFixedThreadPool(10);//pls see for explanation in comments section of the class
while (true) {
String readLine = file.readLine();
if (readLine == null) {
break;
}
Runnable genuineWorker = new Runnable() {
@Override
public void run() {
// do hard processing here in this thread,i have consumed
// some time and ignore some exception in write method.
writeToFile(FILEPATH_WRITE, readLine);
// System.out.println(" :" +
// Thread.currentThread().getName());
}
};
executor.execute(genuineWorker);
}
executor.shutdown();
while (!executor.isTerminated()) {
}
System.out.println("Finished all threads");
file.close();
fileToWrite.close();
}
/**
* @param filePath
* @param data
* @param position
*/
private static void writeToFile(String filePath, String data) {
try {
// fileToWrite.seek(position);
data = "\n" + data;
if (!data.contains("Randomization")) {
return;
}
System.out.println("Let us do something time consuming to make this thread busy"+(position++) + " :" + data);
System.out.println("Lets consume through this loop");
int i=1000;
while(i>0){
i--;
}
fileToWrite.write(data.getBytes());
throw new Exception();
} catch (Exception exception) {
System.out.println("exception was thrown but still we are able to proceeed further"
+ " \n This can be used for marking failure of the records");
//exception.printStackTrace();
}
}
}
Don't use read() without arguments; it's very slow. Better to read into a buffer and move it to the file quickly. Use BufferedInputStream because it supports binary reading. And that's all.
Unless you accidentally read in the whole input file instead of reading it line by line, your primary limitation will be disk speed. You may want to try starting with a file containing 100 lines and writing it to 100 different files, one line in each, making the triggering mechanism work on the number of lines written to the current file. That program will be easily scalable to your situation.
OK. I am supposed to write a program that takes a 20 GB file with 1,000,000,000 records as input and creates some kind of index for faster access. I have basically decided to split the 1 billion records into 10 buckets, with 10 sub-buckets within each of those. I calculate two hash values for a record to locate its appropriate bucket. Now I create 10*10 files, one for each sub-bucket. As I hash each record from the input file, I decide which of the 100 files it goes to, then append the record's offset to that particular file.
I have tested this with a sample file of 10,000 records, repeating the process 10 times to effectively emulate a 100,000-record file. This takes around 18 seconds, which means it is going to take me forever to do the same for a 1-billion-record file.
Is there any way I can speed up / optimize my writing?
I am going through all this because I can't store all the records in main memory.
import java.io.*;
// PROGRAM DOES THE FOLLOWING
// 1. READS RECORDS FROM A FILE.
// 2. CALCULATES TWO SETS OF HASH VALUES N, M
// 3. APPENDING THE OFFSET OF THAT RECORD IN THE ORIGINAL FILE TO ANOTHER FILE "NM.TXT" i.e REPLACE THE VALUES OF N AND M.
// 4.
class storage
{
public static int siz=10;
public static FileWriter[][] f;
}
class proxy
{
static String[][] virtual_buffer;
public static void main(String[] args) throws Exception
{
virtual_buffer = new String[storage.siz][storage.siz]; // TEMPORARY STRING BUFFER TO REDUCE WRITES
String s,tes;
for(int y=0;y<storage.siz;y++)
{
for(int z=0;z<storage.siz;z++)
{
virtual_buffer[y][z]=""; // INITIALISING ALL ELEMENTS TO ZERO
}
}
int offset_in_file = 0;
long start = System.currentTimeMillis();
// READING FROM THE SAME IP FILE 20 TIMES TO EMULATE A SINGLE BIGGER FILE OF SIZE 20*IP FILE
for(int h=0;h<20;h++){
BufferedReader in = new BufferedReader(new FileReader("outTest.txt"));
while((s = in.readLine() )!= null)
{
tes = (s.split(";"))[0];
int n = calcHash(tes); // FINDING FIRST HASH VALUE
int m = calcHash2(tes); // SECOND HASH
index_up(n,m,offset_in_file); // METHOD TO WRITE TO THE APPROPRIATE FILE I.E NM.TXT
offset_in_file++;
}
in.close();
}
System.out.println(offset_in_file);
long end = System.currentTimeMillis();
System.out.println((end-start));
}
static int calcHash(String s) throws Exception
{
char[] charr = s.toCharArray();
int i,tot=0;
for(i=0;i<charr.length;i++)
{
if(i%2==0)tot+= (int)charr[i];
}
tot = tot % storage.siz;
return tot;
}
static int calcHash2(String s) throws Exception
{
char[] charr = s.toCharArray();
int i,tot=1;
for(i=0;i<charr.length;i++)
{
if(i%2==1)tot+= (int)charr[i];
}
tot = tot % storage.siz;
if (tot<0)
tot=tot*-1;
return tot;
}
static void index_up(int a,int b,int off) throws Exception
{
virtual_buffer[a][b]+=Integer.toString(off)+"'"; // THIS BUFFER STORES THE DATA TO BE WRITTEN
if(virtual_buffer[a][b].length()>2000) // TO A FILE BEFORE WRITING TO IT, TO REDUCE NO. OF WRITES
{
String file = "c:\\adsproj\\"+a+b+".txt";
new writethreader(file,virtual_buffer[a][b]); // DOING THE ACTUAL WRITE PART IN A THREAD.
virtual_buffer[a][b]="";
}
}
}
class writethreader implements Runnable
{
Thread t;
String name, data;
writethreader(String name, String data)
{
this.name = name;
this.data = data;
t = new Thread(this);
t.start();
}
public void run()
{
try{
File f = new File(name);
if(!f.exists())f.createNewFile();
FileWriter fstream = new FileWriter(name,true); //APPEND MODE
fstream.write(data);
fstream.flush(); fstream.close();
}
catch(Exception e){}
}
}
Consider using VisualVM to pinpoint the bottlenecks. Everything else below is based on guesswork - and performance guesswork is often really, really wrong.
I think you have two issues with your write strategy.
The first is that you're starting a new thread on each write; the second is that you're re-opening the file on each write.
The thread problem is especially bad, I think, because I don't see anything preventing one thread writing on a file from overlapping with another. What happens then? Frankly, I don't know - but I doubt it's good.
Consider, instead, creating an array of open files for all 100. Your OS may have a problem with this - but I think probably not. Then create a queue of work for each file. Create a set of worker threads (100 is too many - think 10 or so) where each "owns" a set of files that it loops through, outputting and emptying the queue for each file. Pay attention to the interthread interaction between queue reader and writer - use an appropriate queue class.
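A rough sketch of that layout: 100 pre-opened writers, one queue of pending index entries per file, and 10 workers that each own every tenth file and drain its queue. Shutdown and flushing are omitted, and all names, paths, and sizes are illustrative assumptions, not the asker's code:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class BucketWriters {

    static final int BUCKETS = 100;
    static final int WORKERS = 10;
    static final BufferedWriter[] writers = new BufferedWriter[BUCKETS];
    @SuppressWarnings("unchecked")
    static final BlockingQueue<String>[] queues = new BlockingQueue[BUCKETS];

    public static void main(String[] args) throws IOException {
        // open every bucket file once, up front, instead of once per write
        for (int i = 0; i < BUCKETS; i++) {
            writers[i] = new BufferedWriter(new FileWriter("c:\\adsproj\\bucket" + i + ".txt", true));
            queues[i] = new LinkedBlockingQueue<>();
        }

        ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
        for (int w = 0; w < WORKERS; w++) {
            final int owner = w;
            pool.submit(() -> {
                while (true) {                                        // a real version needs a stop signal
                    for (int i = owner; i < BUCKETS; i += WORKERS) {  // this worker owns every 10th bucket
                        String pending;
                        while ((pending = queues[i].poll()) != null) {
                            try {
                                writers[i].write(pending);            // only this worker ever touches writers[i]
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }
                }
            });
        }

        // the hashing loop then just does queues[n * 10 + m].add(offset + "'") and never blocks on the disk
    }
}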
I would throw away the entire requirement and use a database.