I'm currently trying to develop a simple piece of software which retrieves articles from an NNTP server. I'm using NNTPClient from apache.commons.net.
When I retrieve all the segments of an article, the segments are longer than expected and I cannot decode them (and merge them) with a yEnc decoder (like this one).
Here's my code, which downloads the segments and writes them to disk:
BufferedReader br;
String line;
List<File> files = new ArrayList<File>();
for (NzbSegment s : segments) {
    String str = s.getMessageID();
    br = (BufferedReader) client.retrieveArticleBody("<" + str + ">");
    String filePath = fileName + "-" + s.getSegmentNumber() + "body.yenc";
    File f = new File(filePath);
    f.delete(); // Make sure we have a new clean file
    f = new File(filePath);
    int bytes = 0;
    while ((line = br.readLine()) != null) {
        FileUtils.writeStringToFile(f, line + "\n", true);
        bytes += line.getBytes().length;
    }
    System.out.println("size : " + s.getBytes() + " compare to : " + bytes);
    br.close();
    files.add(f);
}
with a POJO NzbSegment:
public class NzbSegment {
    private int bytes;
    private int segmentNumber;
    private String messageID;
}
Do you know where I am mistaken?
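For what it's worth, this is the kind of byte-preserving copy I was thinking of trying instead of the line-by-line loop above. It is only a sketch (a fragment of the loop body, not a complete class): it assumes the Reader returned by retrieveArticleBody uses a single-byte charset so ISO-8859-1 round-trips the yEnc data unchanged, which I have not verified.
// Sketch only: copy the body Reader straight to the file, without re-adding
// a "\n" per line and without going through the platform default charset.
Reader body = client.retrieveArticleBody("<" + str + ">");
try (Writer out = new OutputStreamWriter(
        new FileOutputStream(filePath), StandardCharsets.ISO_8859_1)) {
    char[] buffer = new char[8192];
    int n;
    while ((n = body.read(buffer)) != -1) {
        out.write(buffer, 0, n);
    }
}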
Related
I created the following code to read a CSV file:
public void read(String csvFile) {
    try {
        File file = new File(csvFile);
        FileReader fr = new FileReader(file);
        BufferedReader br = new BufferedReader(fr);
        String line = "";
        String[] tempArr;
        while ((line = br.readLine()) != null) {
            tempArr = line.split(ABSTAND);  // ABSTAND is the delimiter (regex) used to split each line
            anzahl++;                       // anzahl counts the lines read
            for (String tempStr : tempArr) {
                System.out.print(tempStr + " ");
            }
            System.out.println();
        }
        br.close();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
I have a CSV with more than 300'000 lines that look like this:
{9149F314-862B-4DBC-B291-05A083658D69};Gebaeude;TLM_GEBAEUDE;;Schiessstand;{41C949A2-9F7B-41EE-93FD-631B76F2176D};Altdorf 300m;offiziell;Hochdeutsch inkl. Lokalsprachen;Einfacher Name;;684600;295930;400
How can I now get only some parts out of that? I only need the bold/italic parts (the name and the coordinate fields) to work with.
Without further specification of your requirements/input limitations, the following should work within your loop.
String str = "{9149F314-862B-4DBC-B291-05A083658D69};Gebaeude;TLM_GEBAEUDE;;Schiessstand;{41C949A2-9F7B-41EE-93FD-631B76F2176D};Altdorf 300m;offiziell;Hochdeutsch inkl. Lokalsprachen;Einfacher Name;;684600;295930;400";
String[] arr = str.split("[; ]", -1);
int cnt=0;
// for (String a : arr)
// System.out.println(cnt++ + ": " + a);
System.out.println(arr[6] + ", " + arr[15] + ", " + arr[16]);
Note that this assumes your delimiters are either a semicolon or a space and that the desired fields are in fixed positions (6, 15 and 16).
Result:
Altdorf, 684600, 295930
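Put into a read loop like the one in the question, it might look like this (just a sketch; it splits on ';' or ' ' directly instead of using the ABSTAND constant, and it guards against lines that are too short):
try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) {
    String line;
    while ((line = br.readLine()) != null) {
        // Split on either ';' or ' ' and keep empty fields, as in the answer above.
        String[] arr = line.split("[; ]", -1);
        if (arr.length > 16) {
            System.out.println(arr[6] + ", " + arr[15] + ", " + arr[16]);
        }
    }
}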
I am writing BLE scan results to a CSV file. What I am doing currently is writing all the data one below another.
The data consists of the device name, RSSI and MAC address. For example, the CSV file looks like this -
DeviceA -85 DS:DA:AB:2B:B4:AE
DeviceB -100 2C:18:0B:2B:96:9E
DeviceA -85 DS:DA:AB:2B:B4:AE
My requirement is to write like this -
DeviceA -85 DS:DA:AB:2B:B4:AE DeviceB -100 2C:18:0B:2B:96:9E
DeviceA -85 DS:DA:AB:2B:B4:AE
After the last column of device A, I need to start with a new column for device B instead of writing below device A.
Likewise, the next device should be written beside the previous one, and so on. Here is my code for writing to the CSV.
public final String DATA_SEPARATOR = ",";
public final String LINE_SEPARATOR = System.getProperty("line.separator");

try {
    fileName = "test.csv";
    path = Environment.getExternalStorageDirectory()
            + File.separator + "Documents";
    path += File.separatorChar + "SampleApp";
    File file = new File(path, fileName);
    new File(path).mkdirs();
    file.createNewFile();
    fileStream = new OutputStreamWriter(new FileOutputStream(file));
    fileStream.write("sep= " + DATA_SEPARATOR + LINE_SEPARATOR);
} catch (IOException e) {
    e.printStackTrace();
    fileStream = null;
}

private void writeElements(Object... elements) throws IOException {
    if (fileStream != null) {
        for (Object o : elements) {
            fileStream.write(o.toString());
            fileStream.write(DATA_SEPARATOR);
        }
        fileStream.write(LINE_SEPARATOR);
    }
}
writeElements(btDeviceName, btRSSIValue, btMacId) is called from the bluetoothScan() method every now and then.
How can I write the entries side by side?
Just put two entries on the same line before writing a LINE_SEPARATOR. Change what's in your writeElements to something like this:
private void writeElements(Object... elements) throws IOException {
    if (fileStream != null) {
        for (int index = 1; index < elements.length + 1; index++) {
            String address = elements[index - 1].toString();
            fileStream.write(address);
            if (index % 2 == 0) fileStream.write(LINE_SEPARATOR);
            else fileStream.write(DATA_SEPARATOR);
        }
    }
}
Testing:
Object[] elements = new Object[4];
elements[0] = "here";
elements[1] = "are";
elements[2] = "some";
elements[3] = "words";
writeElements(elements);
When opening the file:
here,are
some,words
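If writeElements keeps being called with the three fields of one device at a time (name, RSSI, MAC), another option is to count devices rather than elements and only emit the line separator after every second device. This is just a sketch that reuses fileStream, DATA_SEPARATOR and LINE_SEPARATOR from the question; the method and field names are made up:
private int devicesOnCurrentRow = 0;

// Writes one device per call; every second device ends the CSV row.
private void writeDevice(String name, int rssi, String mac) throws IOException {
    if (fileStream == null) return;
    fileStream.write(name + DATA_SEPARATOR + rssi + DATA_SEPARATOR + mac);
    devicesOnCurrentRow++;
    if (devicesOnCurrentRow == 2) {
        fileStream.write(LINE_SEPARATOR);   // start a new row after two devices
        devicesOnCurrentRow = 0;
    } else {
        fileStream.write(DATA_SEPARATOR);   // separate the two devices on the same row
    }
}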
I'm new to Java. I need to define a counter and then write the result to a file.
int counter = 0;
int resultstweets = 0;
fos = new FileOutputStream(new File(prop.getProperty("PATH_TO_OUTPUT_FILE")));
BufferedReader br = new BufferedReader(new FileReader("path/of/file"));
while ((tweetJson = br.readLine()) != null) {
    String result = drpc.execute(TOPOLOGY_NAME, tweetJson);
    Status s = null;
    try {
        s = DataObjectFactory.createStatus(tweetJson);
        result = s.getId() + "\t" + s.getText() + "\t" + result;
        // this is my counter
        resultstweets += counter;
    } catch (TwitterException e) {
        LOG.error(e.toString());
    }
    fos.write(result.getBytes());
    fos.write(newLine);
}
fos.write(newLine);
fos.write("Finish: ".getBytes());
fos.write("resultstweets".getBytes());
fos.write(newLine);
// here I write it in the file
fos.write(resultstweets);
but what I got at the end of the file is:
Finish: resultstweets
\001459202139258
In your last line you are calling java.io.FileOutputStream.write(int b), which writes a single raw byte rather than the digits of the number.
To write the number as text, first convert your integer to a String and then call getBytes() on that, so the write(byte[] b) overload is used:
fos.write(String.valueOf(resultstweets).getBytes());
You can find a proper example of using this method here.
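A tiny self-contained sketch of the difference (the file name and value are made up):
import java.io.FileOutputStream;
import java.io.IOException;

public class WriteIntDemo {
    public static void main(String[] args) throws IOException {
        int resultstweets = 42;
        try (FileOutputStream fos = new FileOutputStream("demo.txt")) {
            fos.write(resultstweets);                             // writes one raw byte (0x2A), not the text "42"
            fos.write(String.valueOf(resultstweets).getBytes());  // writes the characters "42"
            // Note: "resultstweets".getBytes() writes the literal variable name,
            // which is why the output above shows the text "resultstweets".
        }
    }
}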
So I have been trying to work this out for a while but have been unable to come up with a "rapid" solution. I do have a solution in place, but it takes literally 3 days to complete, and unfortunately that is far too long.
What I am trying to do:
I have a text file (call this 1.txt) that contains unique time stamps, and a second text file (call this 2.txt) that contains mixed data. The intention is to read the first time stamp from 1.txt, find the matching lines in 2.txt, output them to a new file, and do this for every time stamp. There are approximately 100,000 time stamps in 1.txt and over 11 million lines in 2.txt.
What I have achieved:
So far it gets the first time stamp and has a nested loop that goes through the 11 million lines to find a match. Once a match is found, it stores it in a variable until it moves on to the next timestamp, at which point it writes that data out. Solution below:
public class fedOrganiser5 {
    private static String directory = "C:\\Users\\xxx\\Desktop\\Files\\";
    private static String file = "combined.txt";
    private static Integer fileNo = 1;

    public static void main(String[] args) throws IOException {
        String sCurrentLine = "";
        int i = 1;
        String mapperValue = "";
        String outputFirst = "";
        String outputSecond = "";
        String outputThird = "";
        long timer;
        int counter = 0;
        int test = 0;
        timer = System.currentTimeMillis();
        try {
            BufferedReader reader = new BufferedReader(new FileReader(directory + "newfile" + fileNo + ".txt"));
            BufferedWriter writer = new BufferedWriter(new FileWriter(directory + "final_" + fileNo + ".txt"));
            BufferedReader mapper = new BufferedReader(new FileReader(directory + file));
            for (sCurrentLine = reader.readLine(); sCurrentLine != null; sCurrentLine = reader.readLine()) {
                if (!sCurrentLine.trim().isEmpty() && sCurrentLine.trim().length() > 2) {
                    sCurrentLine = sCurrentLine.replace(" ", "").replace(", ", "").replace(",", "").replace("[", "");
                    try {
                        if (counter > 0) {
                            writer.write(outputFirst + outputSecond + outputThird);
                            outputFirst = "";
                            outputSecond = "";
                            outputThird = "";
                            counter = 0;
                            test = 0;
                            i++;
                            mapper.close();
                            mapper = new BufferedReader(new FileReader(directory + file));
                            System.out.println("Writing out details for " + sCurrentLine);
                        }
                        for (mapperValue = mapper.readLine(); mapperValue != null; mapperValue = mapper.readLine()) {
                            test++;
                            System.out.println("Find match " + i + " - " + test);
                            if (mapperValue.contains(sCurrentLine)) {
                                System.out.println("Match found - Mapping " + sCurrentLine + i);
                                if (mapperValue.contains("[EVENT=agentStateEvent]")) {
                                    outputFirst += mapperValue.trim() + "\r\n";
                                    counter++;
                                } else if (mapperValue.contains("[EVENT=TerminalConnectionCreated]")) {
                                    outputSecond += mapperValue.trim() + "\r\n";
                                    counter++;
                                } else {
                                    outputThird += mapperValue.trim() + "\r\n";
                                    counter++;
                                }
                            }
                        }
                    } catch (Exception e) {
                        System.err.println("Error: " + sCurrentLine + " " + mapperValue);
                    }
                }
            }
            System.out.println("writing final record out");
            writer.write(outputFirst + outputSecond + outputThird);
            writer.close();
            System.out.println("complete!");
            System.out.print("Time taken: " +
                    (TimeUnit.MILLISECONDS.toMinutes(System.currentTimeMillis()) - TimeUnit.MILLISECONDS.toMinutes(timer))
                    + " minutes");
        } catch (Exception e) {
            System.err.println("Error: Target File Cannot Be Read");
        }
    }
}
The problem?
I have tried looking through other solutions on Google and forums, but have been unable to find a suitable or faster approach (or it's something that's beyond my depth of knowledge). Looping through 11 million lines for every time stamp takes approximately 10 minutes, and with 10,000 timestamps you can imagine how long the process will take. Can someone give me some friendly advice on where to look, or any APIs that could speed this process up?
Want to thank everyone for their suggestions. I will certainly try the database method proposed by Roman, as it may be the quickest for the type of work I am trying to do; if that doesn't work out I will try the other solutions proposed :)
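For the non-database route, a minimal sketch of the usual trick is shown below: load the timestamps from 1.txt into a HashSet once, then make a single pass over the 11-million-line file with one hash lookup per line. The extractTimestamp helper is hypothetical; it assumes the timestamp appears as a bracketed token in each line of combined.txt and would need to be adapted to the real line format.
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

public class FastTimestampMatch {
    // Hypothetical helper: pull the timestamp token out of one line of combined.txt.
    // Here it is assumed to be the first [...] token; adjust to the real format.
    private static String extractTimestamp(String line) {
        int start = line.indexOf('[');
        int end = line.indexOf(']', start + 1);
        return (start >= 0 && end > start) ? line.substring(start + 1, end) : null;
    }

    public static void main(String[] args) throws IOException {
        String directory = "C:\\Users\\xxx\\Desktop\\Files\\";

        // Load all ~100,000 timestamps once; they easily fit in memory.
        Set<String> wanted = new HashSet<>();
        for (String line : Files.readAllLines(Paths.get(directory + "newfile1.txt"), StandardCharsets.UTF_8)) {
            String t = line.trim().replace(" ", "").replace(",", "").replace("[", "");
            if (t.length() > 2) {
                wanted.add(t);
            }
        }

        // One pass over combined.txt with an O(1) lookup per line,
        // instead of re-reading the whole file for every timestamp.
        try (BufferedReader mapper = Files.newBufferedReader(Paths.get(directory + "combined.txt"), StandardCharsets.UTF_8);
             BufferedWriter writer = Files.newBufferedWriter(Paths.get(directory + "final_1.txt"), StandardCharsets.UTF_8)) {
            String line;
            while ((line = mapper.readLine()) != null) {
                String t = extractTimestamp(line);
                if (t != null && wanted.contains(t)) {
                    writer.write(line.trim());
                    writer.newLine();
                }
            }
        }
    }
}
This sketch drops the per-timestamp grouping of the original code; if the output needs to stay grouped by event type per timestamp, collect matches in a Map<String, StringBuilder> keyed by timestamp instead of writing immediately.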
I am trying to read an ASCII file and find the position of every newline character "\n", so that I know which and how many characters I have in every line. The file size is 538 MB. When I run the code below it never prints anything.
I searched a lot but didn't find anything for ASCII files. I use NetBeans and Java 8. Any ideas?
Below is my code.
String inputFile = "C:\\myfile.txt";
FileInputStream in = new FileInputStream(inputFile);
FileChannel ch = in.getChannel();
int BUFSIZE = 512;
ByteBuffer buf = ByteBuffer.allocateDirect(BUFSIZE);
Charset cs = Charset.forName("ASCII");
int rd;
while ((rd = ch.read(buf)) != -1) {
    buf.rewind();
    CharBuffer chbuf = cs.decode(buf);
    for (int i = 0; i < chbuf.length(); i++) {
        if (chbuf.get() == '\n') {
            System.out.println("PRINT SOMETHING");
        }
    }
}
Method to store the contents of a file to a string:
static String readFile(String path, Charset encoding) throws IOException
{
byte[] encoded = Files.readAllBytes(Paths.get(path));
return new String(encoded, encoding);
}
Here's a way to find the occurrences of a character in the entire string:
public static void main(String[] args) throws IOException
{
    List<Integer> indexes = new ArrayList<Integer>();
    String content = readFile("filetest", StandardCharsets.UTF_8);
    int index = content.indexOf('\n');
    while (index >= 0)
    {
        indexes.add(index);
        index = content.indexOf('\n', index + 1);
    }
}
Found here and here.
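To get the per-line character counts from those positions, something like the following should work (a sketch; it ignores a possible final line without a trailing '\n' and counts any '\r' before the '\n' as a character):
// Derive line lengths from consecutive newline positions collected in 'indexes'.
int previous = -1;
for (int i = 0; i < indexes.size(); i++) {
    int newlinePos = indexes.get(i);
    System.out.println("Line " + i + " has " + (newlinePos - previous - 1) + " characters");
    previous = newlinePos;
}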
The number of characters in a line is the length of the string read by a readLine call:
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
    int iLine = 0;
    String line;
    while ((line = br.readLine()) != null) {
        System.out.println("Line " + iLine + " has " +
                line.length() + " characters.");
        iLine++;
    }
} catch (IOException ioe) {
    // ...
}
Note that the (system-dependent) line end marker has been stripped from the string by readLine.
If a very large file contains no newlines, it is indeed possible to run out of memory. Reading character by character will avoid this.
File file = new File("Z.java");
Reader reader = new FileReader(file);
int len = 0;
int c;
int iLine = 0;
while ((c = reader.read()) != -1) {
    if (c == '\n') {
        iLine++;
        System.out.println("line " + iLine + " contains " +
                len + " characters");
        len = 0;
    } else {
        len++;
    }
}
reader.close();
You should use FileReader, which is a convenience class for reading character files.
The FileInputStream Javadoc clearly states:
FileInputStream is meant for reading streams of raw bytes such as
image data. For reading streams of characters, consider using
FileReader.
Try the following:
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
    String line;
    long pos = 0;                 // running character offset in the file
    while ((line = br.readLine()) != null) {
        // readLine() strips the terminator, so the '\n' sits right after the line
        System.out.println("\\n at " + (pos + line.length())
                + " (line has " + line.length() + " characters)");
        pos += line.length() + 1; // +1 for the '\n'; adjust if the file uses "\r\n"
    }
}