The SWT FileDialog gives me an empty result array if I select too many files (approx. >2500 files). The listing below shows how I use the dialog. If I select too many sound files, the System.out.println at the end prints 0. Debugging tells me that the files array is empty in this case. Is there any way to get this to work?
FileDialog fileDialog = new FileDialog(mainView.getShell(), SWT.MULTI);
fileDialog.setText("Choose sound files");
fileDialog.setFilterExtensions(new String[] { "*.wav" });
Vector<String> result = new Vector<String>();
fileDialog.open();
String[] files = fileDialog.getFileNames();
for (int i = 0, n = files.length; i < n; i++) {
    if (!files[i].contains(".wav")) {
        System.out.println(files[i]);
    }
    StringBuffer stringBuffer = new StringBuffer();
    stringBuffer.append(fileDialog.getFilterPath());
    if (stringBuffer.charAt(stringBuffer.length() - 1) != File.separatorChar) {
        stringBuffer.append(File.separatorChar);
    }
    stringBuffer.append(files[i]);
    String finalName = stringBuffer.toString();
    if (!finalName.contains(".wav")) {
        System.out.println(finalName);
    }
    result.add(finalName);
}
System.out.println(result.size());
I've looked at the FileDialog source code and I'm afraid there is an upper bound: a 32 kB byte buffer for all null-terminated file names (if I understood it correctly).
Calculating with your values, if the average length of your file name strings is around 12 characters, then you have hit exactly that upper bound.
So the only way out is to select the files in two or more steps, for example by opening the dialog repeatedly and accumulating the results, as sketched below.
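A rough sketch of that workaround, assuming the same mainView shell from the question: open the dialog repeatedly and accumulate the selections until the user cancels, so each invocation stays under the buffer limit.
// Sketch: accumulate selections over several dialog invocations.
List<String> result = new ArrayList<String>();
while (true) {
    FileDialog fileDialog = new FileDialog(mainView.getShell(), SWT.MULTI);
    fileDialog.setText("Choose sound files (cancel when done)");
    fileDialog.setFilterExtensions(new String[] { "*.wav" });
    if (fileDialog.open() == null) {
        break; // user cancelled, no further batch to add
    }
    String filterPath = fileDialog.getFilterPath();
    for (String name : fileDialog.getFileNames()) {
        result.add(new File(filterPath, name).getAbsolutePath());
    }
}
System.out.println(result.size());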
I'm trying to figure out how to make a JProgressBar fill up as a file is being read. More specifically I need to read in 2 files and fill 2 JProgressBars, and then stop after one of the files has been read.
I am having trouble understanding how to make that work with a file. With two threads I would just use a for (int i = 0; i < 100; i++) loop and setValue(i) to report the current progress. But with files I don't know how to set the progress. Maybe get the size of the file and do something with that? I am really not sure and was hoping someone could throw an idea or two my way.
Thank you!
Update for future readers:
I managed to solve it by using file.length(), which returns the size of the file in bytes, setting the bar to go from 0 to that size instead of the regular 100, and then using
for (int i = 0; i < fileSize; i++)
to get the bar loading like it should.
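A minimal sketch of that idea (the file path and buffer size are placeholders, and the progress bar is assumed to already be added to a visible frame): the bar's maximum is set to the file length, and its value advances by the number of bytes actually read.
// Sketch: drive a JProgressBar from the number of bytes read so far.
// Run this loop on a background thread (e.g. inside a SwingWorker),
// not on the Event Dispatch Thread.
File file = new File("data.bin"); // hypothetical path
JProgressBar progressBar = new JProgressBar(0, (int) file.length());

try (FileInputStream in = new FileInputStream(file)) {
    byte[] buffer = new byte[8192];
    long total = 0;
    int bytesRead;
    while ((bytesRead = in.read(buffer)) != -1) {
        total += bytesRead;
        final int progress = (int) total;
        // update the bar on the Event Dispatch Thread
        SwingUtilities.invokeLater(() -> progressBar.setValue(progress));
        // ... process the bytes in buffer here ...
    }
} catch (IOException ex) {
    ex.printStackTrace();
}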
Example usage of ProgressMonitorInputStream. It automatically displays a simple dialog with a progress bar if reading from the InputStream takes longer than a threshold - you can adjust that time with setMillisToPopup and setMillisToDecideToPopup.
public static void main(String[] args) {
    JFrame mainFrame = new JFrame();
    mainFrame.setSize(640, 480);
    mainFrame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
    mainFrame.setVisible(true);

    String filename = "Path to your filename"; // replace with a real filename
    File file = new File(filename);
    try (FileInputStream inputStream = new FileInputStream(file);
         ProgressMonitorInputStream progressInputStream = new ProgressMonitorInputStream(
                 mainFrame, "Reading file: " + filename, inputStream)) {
        byte[] buffer = new byte[10]; // make this bigger (e.g. 10240 bytes); 10 only shows what the dialog looks like
        long totalRead = 0;
        long totalSize = file.length();
        int bytesRead;
        while ((bytesRead = progressInputStream.read(buffer)) != -1) {
            totalRead += bytesRead;
            progressInputStream.getProgressMonitor().setNote(
                    String.format("%d / %d kB", totalRead / 1024, totalSize / 1024));
            // do something with the data in buffer
        }
    } catch (IOException ex) {
        System.err.println(ex);
    }
}
I'm saving (generated) Word documents as files via Jacob by printing them into a file (I have to do it like this because the files are required by legacy programs).
The problem is that if I do this for multiple files, every second file is not written correctly.
The first file is OK.
The 2nd is only written to about 80% of its size.
The 3rd is OK.
The 4th is the same as the 2nd (exactly the same file size as the 2nd).
... and so on.
This is my code:
public static void main(String args[]) {
    Variant background = new Variant(false);
    Variant append = new Variant(false);
    Variant range = new Variant(0); // wdPrintAllDocument

    ActiveXComponent oleComponent = new ActiveXComponent("Word.Application");
    Variant var = Dispatch.get(oleComponent, "Documents");
    Dispatch document = var.getDispatch();
    Dispatch doc = Dispatch.call(document, "Open", "c:/temp/Test.rtf").toDispatch();

    for (int i = 0; i < 10; i++) {
        Dispatch.call(doc, "PrintOut", background, append, range, new Variant("c:/temp/test" + i));
    }

    Dispatch.call(doc, "Close", 0);
    Dispatch.call(oleComponent, "Quit");
}
The problem appears on different printers (a PDF printer, for example, works fine).
Why every 2nd file? A Word problem? A printer (driver) problem?
Help is very much appreciated.
I have code that reads a file using a BufferedReader and split(). The file was created via a method that automatically adds 4 KB of empty space at the beginning of the file, which results in the following when I read it.
First, the code:
BufferedReader metaRead = new BufferedReader(new FileReader(metaFile));
String metaLine = "";
String[] metaData = new String[100000];
while ((metaLine = metaRead.readLine()) != null) {
    metaData = metaLine.split(",");
    for (int i = 0; i < metaData.length; i++) {
        System.out.println(metaData[i]);
    }
}
This is the result (keep in mind that the file already exists and contains these values):
// 4096 spaces, then the first actual word in the file, which is --> testTable2
Name
java.lang.String
true
No Reference
Is there a way to skip the first 4096 spaces and get straight to the actual values in the file, so I can get the result normally? I'll be using the metaData array later in other operations, and I'm pretty sure the spaces will mess up the number of slots in the array. Any suggestions would be appreciated.
BufferedReader has a skip method; if you're using Eclipse, the auto-completion should point you to it:
metaRead.skip(4096);
See the Javadoc: https://docs.oracle.com/javase/7/docs/api/java/io/BufferedReader.html
You could (as mentioned) simply do
metaRead.skip(4096);
if the whitespace always occupies exactly that many characters, or you could just skip lines which are empty:
while ((metaLine = metaRead.readLine()) != null) {
    if (metaLine.trim().length() > 0) {
        metaData = metaLine.split(",");
        for (int i = 0; i < metaData.length; i++) {
            System.out.println(metaData[i]);
        }
    }
}
I'm trying to tokenize a large amount of text in Java. When I say large, I mean entire chapters of books at a time. I wrote the first draft of my code by using a single page from a book and everything worked fine. Now that I'm trying to process entire chapters things aren't working. It processes part of the chapter correctly and then it just stops.
Below is all of the relevant code
File folder = new File(Constants.rawFilePath("eng"));
FileHelper fileHelper = new FileHelper();
BPage firstChapter = new BPage();
BPage firstChapterSpanish = new BPage();
File[] allFiles = folder.listFiles();

// read the files into memory
ArrayList<ArrayList<String>> allPages = new ArrayList<ArrayList<String>>();

// for the English
for (int i = 0; i < allFiles.length; i++) {
    String filePath = Constants.rawFilePath("/eng/metamorph_eng_" + String.valueOf(i) + ".txt");
    ArrayList<String> pageToAdd = fileHelper.readFileToMemory(filePath);
    allPages.add(pageToAdd);
}

String allPagesAsString = "";
for (int i = 0; i < allPages.size(); i++) {
    allPagesAsString = allPagesAsString + fileHelper.turnListToString(allPages.get(i));
}
firstChapter.setUnTokenizedPage(allPagesAsString);
firstChapter.tokenize(Languages.ENGLISH);

folder = new File(Constants.rawFilePath("spa"));
allFiles = folder.listFiles();

// for the Spanish
for (int i = 0; i < allFiles.length; i++) {
    String filePath = Constants.rawFilePath("/eng/metamorph_eng_" + String.valueOf(i) + ".txt");
    ArrayList<String> pageToAdd = fileHelper.readFileToMemory(filePath);
    allPages.add(pageToAdd);
}

allPagesAsString = "";
for (int i = 0; i < allPages.size(); i++) {
    allPagesAsString = allPagesAsString + fileHelper.turnListToString(allPages.get(i));
}
firstChapterSpanish.setUnTokenizedPage(allPagesAsString);
firstChapterSpanish.tokenize(Languages.SPANISH);

fileHelper.writeFile(firstChapter.getTokenizedPage(), Constants.partiallyprocessedFilePath("eng_ch_1.txt"));
fileHelper.writeFile(firstChapterSpanish.getTokenizedPage(), Constants.partiallyprocessedFilePath("spa_ch_1.txt"));
}
Even though I'm reading all of the files in the directory where I expect my text to be, only the first couple of files are being added to the string that I'm processing. It seems like the code keeps running, but it only adds characters to my string up to a certain point.
What do I have to change so that I can process all of my files at once?
This part
String allPagesAsString = "";
for (int i = 0; i < allPages.size(); i++) {
    allPagesAsString = allPagesAsString + fileHelper.turnListToString(allPages.get(i));
}
will be really slow if you're copying increasingly large strings.
Using a StringBuilder will speed things up a bit:
int expectedBookSize = 10000;
StringBuilder allPagesAsString = new StringBuilder(expectedBookSize);
for (int i = 0; i < allPages.size(); i++) {
    allPagesAsString.append(fileHelper.turnListToString(allPages.get(i)));
}
Can't you process one page at a time? That would be the best solution.
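A minimal sketch of that page-at-a-time approach, reusing the BPage and FileHelper API from the question (readFileToMemory, turnListToString, setUnTokenizedPage, tokenize, getTokenizedPage are assumed to behave as shown there); each page is tokenized and written out before the next one is read, so no chapter-sized string is ever built up, and the output file naming is hypothetical.
// Sketch: process one page at a time instead of concatenating the whole chapter.
File folder = new File(Constants.rawFilePath("eng"));
FileHelper fileHelper = new FileHelper();
File[] allFiles = folder.listFiles();

for (int i = 0; i < allFiles.length; i++) {
    String filePath = Constants.rawFilePath("/eng/metamorph_eng_" + i + ".txt");
    ArrayList<String> page = fileHelper.readFileToMemory(filePath);

    BPage bPage = new BPage();
    bPage.setUnTokenizedPage(fileHelper.turnListToString(page));
    bPage.tokenize(Languages.ENGLISH);

    // write each tokenized page to its own output file
    fileHelper.writeFile(bPage.getTokenizedPage(),
            Constants.partiallyprocessedFilePath("eng_ch_1_page_" + i + ".txt"));
}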
I want to parse the WMI output below into a HashMap as key-value pairs using Java. Please give me suggestions.
My WMI output contains 2 rows with multiple columns; the first row is the header and the second row contains the data. I want either a regex or any other approach to associate each header with its corresponding data as a key-value pair for the HashMap.
I have no idea how to proceed...
Caption Description IdentifyingNumber Name
Computer System Product Computer System Product HP xw4600 Workstation
The parsed output should look like this:
Key = Value
Caption = Computer System Product
Description = Computer System Product
IdentifyingNumber =
Name = HP xw4600 Workstation
If your file format is always the same, you can easily parse it:
FileInputStream fis = new FileInputStream(filename);
InputStreamReader isr = new InputStreamReader(fis);
BufferedReader in = new BufferedReader(isr);

// skip the first two lines of the WMI output
for (int i = 0; i < 2; i++) {
    in.readLine();
}

// read the header line and the data line
String[] array = new String[2];
for (int i = 0; i < array.length; i++) {
    array[i] = in.readLine();
    if (array[i] == null) {
        array[i] = "";
    }
}
in.close();

String[] headers = array[0].split(Pattern.quote("\t"));
String[] values = array[1].split(Pattern.quote("\t"));
Then run through both arrays and fill the HashMap, for example as in the sketch below.
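A minimal sketch of that last step, assuming the headers and values arrays from the snippet above line up index by index (columns with empty values, like IdentifyingNumber in the sample, may need extra handling depending on how the output is delimited):
Map<String, String> wmiData = new HashMap<String, String>();
for (int i = 0; i < headers.length; i++) {
    // guard against a data row that has fewer columns than the header row
    String value = (i < values.length) ? values[i].trim() : "";
    wmiData.put(headers[i].trim(), value);
}
System.out.println(wmiData);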
I formatted the WMI output as a list and now it's easy to parse the output.
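If the list format produces one Name=Value pair per line (e.g. Caption=Computer System Product), a sketch of the parsing could look like this; the exact output format depends on your WMI query, and the filename is a placeholder:
// Sketch: parse list-formatted WMI output (one key=value pair per line).
Map<String, String> wmiData = new HashMap<String, String>();
try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
    String line;
    while ((line = reader.readLine()) != null) {
        int eq = line.indexOf('=');
        if (eq < 0) {
            continue; // skip blank lines and anything without a key=value pair
        }
        wmiData.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
    }
}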