I'm getting an error on the return line, and I'm also not sure whether my size check is correct.
private byte[] Get(String urlIn)
{
    URL url = null;
    String urlStr = urlIn;
    if (urlIn != null)
        urlStr = urlIn;
    try
    {
        url = new URL(urlStr);
    } catch (MalformedURLException e)
    {
        e.printStackTrace();
        return null;
    }
    HttpURLConnection urlConnection = null;
    try
    {
        urlConnection = (HttpURLConnection) url.openConnection();
        InputStream in = new BufferedInputStream(urlConnection.getInputStream());
        byte[] buf = new byte[10 * 1024];
        int szRead = in.read(buf);
        byte[] bufOut;
        if (szRead == 10 * 1024)
        {
            throw new AndroidRuntimeException("the returned data is bigger than 10*1024.. we don't handle it..");
        }
        else
        {
            if (szRead > 0) {
                bufOut = Arrays.copyOf(buf, szRead);
            }
        }
        return bufOut;
    }
    catch (IOException e)
    {
        e.printStackTrace();
        return null;
    }
    finally
    {
        if (urlConnection != null)
            urlConnection.disconnect();
    }
}
The part:
    if (szRead > 0) {
        bufOut = Arrays.copyOf(buf, szRead);
    }
I'm not sure whether this is the right way to check it in Java, and with this code I get a "variable bufOut might not have been initialized" error on this line:
    return bufOut;
Why don't you try returning the value returned by Arrays.copyOf() directly, rather than storing it in a variable and then returning it?
You get the "might not have been initialized" error because if szRead is zero or less, bufOut is never assigned. According to the specification, Arrays.copyOf() will throw an exception if szRead is negative (which happens, for example, when the stream is at EOF and read() returns -1), so you could do either:
if (szRead == 10 * 1024)
{
    throw new AndroidRuntimeException("the returned data is bigger than 10*1024.. we don't handle it..");
}
else
{
    bufOut = Arrays.copyOf(buf, szRead);
}
return bufOut;
And catch the NegativeArraySizeException, or throw another AndroidRuntimeException for szRead < 0, or just do:
bufOut = Arrays.copyOf(buf, szRead < 0 ? 0 : szRead);
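To see that last suggestion in isolation, here is a minimal sketch (the class and helper name are mine, not from the original code): the copy length is clamped to zero so Arrays.copyOf() never receives a negative size on EOF.

```java
import java.util.Arrays;

public class ClampedCopyDemo {
    // Copy the first szRead bytes of buf; treat a negative read count
    // (EOF) as zero bytes so Arrays.copyOf never sees a negative size.
    static byte[] clampedCopy(byte[] buf, int szRead) {
        return Arrays.copyOf(buf, Math.max(szRead, 0));
    }

    public static void main(String[] args) {
        byte[] buf = {1, 2, 3, 4, 5};
        System.out.println(clampedCopy(buf, 3).length);  // 3
        System.out.println(clampedCopy(buf, -1).length); // 0 (EOF case)
    }
}
```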
Question at the bottom
I'm using Netty to transfer a file to another server.
I limit my file chunks to 1024*64 bytes (64 KB) because of the WebSocket protocol. The following method is a local example of what happens to the file:
public static void rechunck(File file1, File file2) {
    FileInputStream is = null;
    FileOutputStream os = null;
    try {
        byte[] buf = new byte[1024 * 64];
        is = new FileInputStream(file1);
        os = new FileOutputStream(file2);
        while (is.read(buf) > 0) {
            os.write(buf);
        }
    } catch (IOException e) {
        Controller.handleException(Thread.currentThread(), e);
    } finally {
        try {
            if (is != null && os != null) {
                is.close();
                os.close();
            }
        } catch (IOException e) {
            Controller.handleException(Thread.currentThread(), e);
        }
    }
}
The file is read by the InputStream into a byte buffer and written directly to the OutputStream.
The content of the file cannot change during this process.
To get the MD5 hashes of the files I've written the following method:
public static String checksum(File file) {
    InputStream is = null;
    try {
        is = new FileInputStream(file);
        MessageDigest digest = MessageDigest.getInstance("MD5");
        byte[] buffer = new byte[8192];
        int read = 0;
        while ((read = is.read(buffer)) > 0) {
            digest.update(buffer, 0, read);
        }
        return new BigInteger(1, digest.digest()).toString(16);
    } catch (IOException | NoSuchAlgorithmException e) {
        Controller.handleException(Thread.currentThread(), e);
    } finally {
        try {
            is.close();
        } catch (IOException e) {
            Controller.handleException(Thread.currentThread(), e);
        }
    }
    return null;
}
So, in theory, it should return the same hash, shouldn't it? The problem is that it returns two different hashes, and those hashes do not vary between runs. The file size stays the same, and so does the content.
When I run the method once with in: file-1, out: file-2 and again with in: file-2, out: file-3, the hashes of file-2 and file-3 are the same! This means the method changes the file in the same way every time.
1. 58a4a9fbe349a9e0af172f9cf3e6050a
2. 7b3f343fa1b8c4e1160add4c48322373
3. 7b3f343fa1b8c4e1160add4c48322373
Here is a little test that compares all buffers to check whether they are equivalent. The test is positive, so there aren't any differences.
File file1 = new File("controller/templates/Example.zip");
File file2 = new File("controller/templates2/Example.zip");
try {
    byte[] buf1 = new byte[1024 * 64];
    byte[] buf2 = new byte[1024 * 64];
    FileInputStream is1 = new FileInputStream(file1);
    FileInputStream is2 = new FileInputStream(file2);
    boolean run = true;
    while (run) {
        int read1 = is1.read(buf1), read2 = is2.read(buf2);
        String result1 = Arrays.toString(buf1), result2 = Arrays.toString(buf2);
        boolean test = result1.equals(result2);
        System.out.println("1: " + result1);
        System.out.println("2: " + result2);
        System.out.println("--- TEST RESULT: " + test + " ----------------------------------------------------");
        if (!(read1 > 0 && read2 > 0) || !test) run = false;
    }
} catch (IOException e) {
    e.printStackTrace();
}
Question: Can you help me chunk the file without changing the hash?
while (is.read(buf) > 0) {
    os.write(buf);
}
The read() method with the array argument returns the number of bytes read from the stream. When the file doesn't end exactly at a multiple of the byte array length, this return value will be smaller than the array length because you have reached the end of the file.
However, your os.write(buf); call writes the whole byte array to the stream, including the stale bytes past the file's end. This means the written file ends up bigger, and that is why the hash changes.
Interestingly, you didn't make this mistake when you updated the message digest:
while ((read = is.read(buffer)) > 0) {
    digest.update(buffer, 0, read);
}
You just have to do the same when you "rechunk" your files.
Your rechunck method has a bug in it. Since you use a fixed-size buffer, the file is split into byte-array parts, but the last part of the file can be smaller than the buffer, which is why you write too many bytes into the new file. That is why the checksum no longer matches. The error can be fixed like this:
public static void rechunck(File file1, File file2) {
    FileInputStream is = null;
    FileOutputStream os = null;
    try {
        byte[] buf = new byte[1024 * 64];
        is = new FileInputStream(file1);
        os = new FileOutputStream(file2);
        int length;
        while ((length = is.read(buf)) > 0) {
            os.write(buf, 0, length);
        }
    } catch (IOException e) {
        Controller.handleException(Thread.currentThread(), e);
    } finally {
        try {
            if (is != null)
                is.close();
            if (os != null)
                os.close();
        } catch (IOException e) {
            Controller.handleException(Thread.currentThread(), e);
        }
    }
}
Thanks to the length variable, the write method knows that only the first length bytes of the array belong to the file; beyond that point there are still stale bytes from the previous read that no longer belong to it.
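To illustrate the fix without touching the file system, here is a small self-contained sketch (the class and helper names are mine): it pushes a byte array through the length-aware copy loop, using an in-memory stream and a deliberately odd buffer size, and confirms the MD5 digest is unchanged.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class RechunkDemo {
    // Length-aware copy: only the bytes actually read are written.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[7]; // odd size to force a short final read
        int length;
        while ((length = in.read(buf)) > 0) {
            out.write(buf, 0, length);
        }
    }

    // Rechunk an in-memory byte array through the copy loop above.
    static byte[] rechunk(byte[] data) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            copy(new ByteArrayInputStream(data), out);
            return out.toByteArray();
        } catch (IOException e) {
            throw new IllegalStateException(e); // cannot happen in memory
        }
    }

    static String md5(byte[] data) {
        try {
            MessageDigest digest = MessageDigest.getInstance("MD5");
            return new BigInteger(1, digest.digest(data)).toString(16);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    public static void main(String[] args) {
        byte[] original = "The quick brown fox jumps over the lazy dog".getBytes(StandardCharsets.UTF_8);
        byte[] copied = rechunk(original);
        System.out.println(md5(original).equals(md5(copied))); // true
        System.out.println(copied.length == original.length);  // true
    }
}
```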
When I make the HTTP call and the connection is cut off during it, my app crashes. How can I handle such an issue in this code?
This also happens when there is no connection and the HTTP call is made, and when the server returns the wrong format (not JSON).
HttpConnection conn = null;
OutputStream os = null;
InputStream is = null;
try {
    // construct the URL
    String serverURL = "xxxxxxxxxxxxxxxxxxxxxxx" + Common.getConnectionType();
    // encode the parameters
    URLEncodedPostData postData = new URLEncodedPostData("UTF-8", false);
    byte[] postDataBytes = postData.getBytes();
    // construct the connection
    conn = (HttpConnection) Connector.open(serverURL, Connector.READ_WRITE, true);
    conn.setRequestMethod(HttpConnection.POST);
    conn.setRequestProperty("User-Agent", "BlackBerry/" + DeviceInfo.getDeviceName() + " Software/" + DeviceInfo.getSoftwareVersion() + " Platform/" + DeviceInfo.getPlatformVersion());
    conn.setRequestProperty("Content-Language", "en-US");
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    conn.setRequestProperty("Content-Length", new Integer(postDataBytes.length).toString());
    // write the parameters
    os = conn.openOutputStream();
    os.write(postDataBytes);
    // write the data and get the response code
    int rc = conn.getResponseCode();
    if (rc == HttpConnection.HTTP_OK) {
        // read the response
        ByteVector buffer = new ByteVector();
        is = conn.openInputStream();
        long len = conn.getLength();
        int ch = 0;
        // read content-length or until connection is closed
        if (len != -1) {
            for (int i = 0; i < len; i++) {
                if ((ch = is.read()) != -1) {
                    buffer.addElement((byte) ch);
                }
            }
        } else {
            while ((ch = is.read()) != -1) {
                len = is.available();
                buffer.addElement((byte) ch);
            }
        }
        // set the response
        accountsResponse = new String(buffer.getArray(), "UTF-8");
    } else {
        accountsResponse = null;
    }
} catch (Exception e) {
    accountsResponse = null;
} finally {
    try {
        os.close();
    } catch (Exception e) {
        // handled by OS
    }
    try {
        is.close();
    } catch (Exception e) {
        // handled by OS
    }
    try {
        conn.close();
    } catch (Exception e) {
        // handled by OS
    }
}
I am trying to get a list of FTP files using
    FTPFile[] ftpFiles = ftp.listFiles(READ_DIRECTORY);
There are more than 27553 files in this directory, and I expect the number to grow.
Now I need to retrieve one file from this huge list. I am doing the following:
for (FTPFile ftpFile : ftpFiles)
{
    if (fileName.equalsIgnoreCase(ftpFile.getName()))
    {
        print(ftpFile);
    }
}
But let's say the file I want to print is the last of the 27553 files: it takes about a minute to go through the whole list checking whether each entry is the file I'm looking for. Not only that, it also gives me an "org.apache.commons.net.ftp.FTPConnectionClosedException: FTP response 421 received. Server closed connection." after about 900 seconds.
How can I tune this program to find the file faster? I don't want it to run for 900 seconds. Below is the actual method that takes so long; please suggest how I can reduce the time taken. In debug mode the method runs for hundreds of seconds. On a production server it takes more than a minute or two, which is still not acceptable.
private boolean PDFReport(ActionForm form, HttpServletResponse response,
        String fileName, String READ_DIRECTORY) throws Exception
{
    boolean test = false;
    FTPClient ftp = new FTPClient();
    DataSourceReader dsr = new DataSourceReader();
    dsr.getFtpLinks();
    String ftppassword = dsr.getFtppassword();
    String ftpserver = dsr.getFtpserver();
    String ftpusername = dsr.getFtpusername();
    ftp.connect(ftpserver);
    ftp.login(ftpusername, ftppassword);
    InputStream is = null;
    BufferedInputStream bis = null;
    try
    {
        int reply = ftp.getReplyCode();
        if (!FTPReply.isPositiveCompletion(reply))
        {
            ftp.disconnect();
            System.out.println("FTP server refused connection.");
        } else
        {
            ftp.enterLocalPassiveMode();
            FTPFile[] ftpFiles = ftp.listFiles(READ_DIRECTORY);
            for (FTPFile ftpFile : ftpFiles)
            {
                String FilePdf = ftpFile.getName();
                ftp.setFileType(FTP.BINARY_FILE_TYPE);
                if (FilePdf.equalsIgnoreCase(fileName))
                {
                    String strFile = READ_DIRECTORY + "/" + FilePdf;
                    boolean fileFormatType = fileName.endsWith(".PDF");
                    if (fileFormatType)
                    {
                        if (FilePdf != null && FilePdf.length() > 0)
                        {
                            is = ftp.retrieveFileStream(strFile);
                            bis = new BufferedInputStream(is);
                            response.reset();
                            response.setContentType("application/pdf");
                            response.setHeader("Content-Disposition",
                                    "inline;filename=example.pdf");
                            ServletOutputStream outputStream = response.getOutputStream();
                            byte[] buffer = new byte[1024];
                            int readCount;
                            while ((readCount = bis.read(buffer)) > 0)
                            {
                                outputStream.write(buffer, 0, readCount);
                            }
                            outputStream.flush();
                            outputStream.close();
                        }
                    } else
                    {
                        if (FilePdf != null && FilePdf.length() > 0)
                        {
                            is = ftp.retrieveFileStream(strFile);
                            bis = new BufferedInputStream(is);
                            response.reset();
                            response.setContentType("application/TIFF");
                            response.setHeader("Content-Disposition",
                                    "inline;filename=example.tiff");
                            ServletOutputStream outputStream = response.getOutputStream();
                            byte[] buffer = new byte[1024];
                            int readCount;
                            while ((readCount = bis.read(buffer)) > 0)
                            {
                                outputStream.write(buffer, 0, readCount);
                            }
                            outputStream.flush();
                            outputStream.close();
                        }
                    }
                    test = true;
                }
                if (test) break;
            }
        }
    } catch (Exception ex)
    {
        ex.printStackTrace();
        System.out.println("Exception ----->" + ex.getMessage());
    } finally
    {
        try
        {
            if (bis != null)
            {
                bis.close();
            }
            if (is != null)
            {
                is.close();
            }
        } catch (IOException e)
        {
            e.printStackTrace();
        }
        try
        {
            ftp.disconnect();
            ftp = null;
        } catch (IOException e)
        {
            e.printStackTrace();
        }
    }
    return test;
}
Why bother iterating through the full list? You already know what the filename you want is, and you use it when you call is = ftp.retrieveFileStream(strFile);. Just call that directly without ever calling listFiles().
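As a hedged sketch of that idea (the class and helper names here are mine, not from the original code): build the remote path directly from the filename you already know, pick the response content type from the extension, and hand the path straight to ftp.retrieveFileStream(path) with no listFiles() call at all.

```java
public class DirectRetrieve {
    // Build the remote path from the directory and the already-known
    // filename -- no directory listing required.
    static String buildRemotePath(String directory, String fileName) {
        return directory + "/" + fileName;
    }

    // The original servlet only distinguishes PDF from TIFF responses.
    // ("image/tiff" is the registered MIME type; the original code
    // used "application/TIFF".)
    static String contentTypeFor(String fileName) {
        return fileName.toUpperCase().endsWith(".PDF")
                ? "application/pdf" : "image/tiff";
    }

    public static void main(String[] args) {
        String path = buildRemotePath("/reports", "example.pdf");
        System.out.println(path);                    // /reports/example.pdf
        System.out.println(contentTypeFor("a.PDF")); // application/pdf
        // The path then goes straight into ftp.retrieveFileStream(path),
        // after ftp.setFileType(FTP.BINARY_FILE_TYPE) as in the original.
    }
}
```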
I think there are two ways, depending on how you are going to use it.
Instead of using listFiles to get a list of file names and info, you can use listNames to get only the file names:
String[] ftpFiles = ftp.listNames(READ_DIRECTORY);
// loop over the string array
Or use an FTPFileFilter with listFiles:
FTPFileFilter filter = new FTPFileFilter() {
    @Override
    public boolean accept(FTPFile ftpFile) {
        return ftpFile.isFile() &&
               ftpFile.getName().equalsIgnoreCase(fileName);
    }
};
FTPFile[] ftpFiles = ftp.listFiles(READ_DIRECTORY, filter);
if (ftpFiles != null && ftpFiles.length > 0) {
    for (FTPFile aFile : ftpFiles) {
        print(aFile.getName());
    }
}
Do not use a for-each loop. Use a regular for loop where you can actually control the direction.
for (int i = ftpFiles.length - 1; i >= 0; i--) {
    print(ftpFiles[i]);
}
I want to use the HttpClient class to extract the number of Google hits for many terms continuously, but the Google server won't let me repeat this operation. Can you help me? Here is my program; the parameter Concept is the term I want to search.
public static double extractGoogleCount(String Concept)
{
    double temp = 0;
    HttpClient httpClient = new HttpClient();
    String url = "http://www.google.com/search?hl=en&newwindow=1&q="
            + Concept + "&aq=f&aqi=&aql=&oq=&gs_rfai=";
    GetMethod getMethod = new GetMethod(url);
    getMethod.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
            new DefaultHttpMethodRetryHandler());
    try
    {
        int statusCode = httpClient.executeMethod(getMethod);
        if (statusCode != HttpStatus.SC_OK)
        {
            System.err.println("Method failed: "
                    + getMethod.getStatusLine() + url);
        }
        InputStream responseBody = getMethod.getResponseBodyAsStream();
        DataInputStream dis = new DataInputStream(responseBody);
        String returnPage = dis.readLine();
        while (returnPage != null)
        {
            int index = returnPage.indexOf("<div id=\"resultStats\">");
            if (index == -1)
            {
                returnPage = dis.readLine();
                continue;
            }
            String sub = returnPage.substring(index, index + 100);
            if (sub.indexOf("About") >= 0)
            {
                String[] result = sub.split(" ");
                String number = result[2].replaceAll(",", "");
                temp = Double.parseDouble(number);
            } else
            {
                String[] result = sub.split(" ");
                String number = result[1].substring(result[1]
                        .indexOf(">") + 1);
                System.out.println("number:" + number);
                temp = Double.parseDouble(number);
            }
            break;
        }
        return temp;
    } catch (HttpException e)
    {
        System.out.println("Please check your provided http address!");
        e.printStackTrace();
    } catch (IOException e)
    {
        e.printStackTrace();
    } catch (Exception e)
    {
        e.printStackTrace();
        return temp;
    } finally
    {
        httpClient.getState().clear();
        getMethod.releaseConnection();
    }
    return temp; // fall-through return so every path returns a value
}
Google only allows a certain number of requests per second from a single client.
Try adding:
    Thread.sleep(200);
to the code and it should work. You might also want to do the fetching in a separate thread, so the rest of your program stays responsive if you need to display this data somehow.
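As a sketch of that idea (the class name and the 200 ms interval are assumptions, not anything Google documents), a tiny helper that enforces a minimum gap between successive requests might look like this:

```java
public class MinIntervalLimiter {
    private final long intervalMillis;
    private long lastCall = 0;

    public MinIntervalLimiter(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    // Block until at least intervalMillis have passed since the last call.
    public synchronized void acquire() {
        long wait = lastCall + intervalMillis - System.currentTimeMillis();
        if (wait > 0) {
            try {
                Thread.sleep(wait);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
            }
        }
        lastCall = System.currentTimeMillis();
    }

    public static void main(String[] args) {
        MinIntervalLimiter limiter = new MinIntervalLimiter(200);
        long start = System.currentTimeMillis();
        limiter.acquire(); // first call: no wait
        limiter.acquire(); // second call: waits ~200 ms
        System.out.println(System.currentTimeMillis() - start >= 200); // true
    }
}
```

You would then call limiter.acquire() once before each extractGoogleCount() request.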
What I currently have
I'm currently trying to create a little download manager in Java, and I have a problem with writing the loaded bytes to a file. I'm using a DataOutputStream to write the byte array which I read from a DataInputStream. Here is the class I created to do that:
public class DownloadThread extends Thread {
    private String url_s;
    private File datei;

    public DownloadThread(String url_s, File datei) {
        this.url_s = url_s;
        this.datei = datei;
    }

    public void run() {
        // Connect:
        int size = 0;
        URLConnection con = null;
        try {
            URL url = new URL(url_s);
            con = url.openConnection();
            size = con.getContentLength();
            // Set Min and Max of the JProgressBar
            prog.setMinimum(0);
            prog.setMaximum(size);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Download:
        if (con != null || size != 0) {
            byte[] buffer = new byte[4096];
            DataInputStream down_reader = null;
            // Output:
            DataOutputStream out = null;
            try {
                out = new DataOutputStream(new FileOutputStream(datei));
            } catch (FileNotFoundException e1) {
                e1.printStackTrace();
            }
            // Load:
            try {
                down_reader = new DataInputStream(con.getInputStream());
                int byte_counter = 0;
                int tmp = 0;
                int progress = 0;
                // Read:
                while (true) {
                    tmp = down_reader.read(buffer);
                    // Check for EOF
                    if (tmp == -1) {
                        break;
                    }
                    out.write(buffer);
                    out.flush();
                    // Set Progress:
                    byte_counter += tmp;
                    progress = (byte_counter * 100) / size;
                    prog.setValue(byte_counter);
                    prog.setString(progress + "% - " + byte_counter + "/" + size + " Bytes");
                }
                // Check Filesize:
                prog.setString("Checking Integrity...");
                if (size == out.size()) {
                    prog.setString("Integrity Check passed!");
                } else {
                    prog.setString("Integrity Check failed!");
                    System.out.println("Size: " + size + "\n" +
                            "Read: " + byte_counter + "\n" +
                            "Written: " + out.size());
                }
                // ENDE
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    out.close();
                    down_reader.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        // Clean Up...
        load.setEnabled(true);
        try {
            this.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
This is currently an inner class, and the prog object is a JProgressBar from its enclosing class, so it can be accessed directly.
Example
I'm trying to download the Windows .exe version of TinyCC, which should be 281 KB in size. The file I downloaded with my download manager is 376 KB.
The output of the script looks like this:
Size: 287181
Read: 287181
Written: 385024
So it seems that the read bytes match the file size, but more bytes are written. What am I missing here?
This is wrong:
out.write(buffer);
It should be
out.write(buffer, 0, tmp);
You need to specify how many bytes to write, a read doesn't always read a full buffer.
Memorize this. It is the canonical way to copy a stream in Java.
int count;
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
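A minimal, self-contained illustration of why this loop matters (the class and method names are mine): copying 10 bytes through a 4-byte buffer writes 12 bytes with the buggy whole-buffer write, but exactly 10 with the canonical loop.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class CopyLoopDemo {
    // Buggy loop: always writes the whole buffer, padding the output.
    static int buggySize(byte[] data) {
        byte[] buffer = new byte[4];
        InputStream in = new ByteArrayInputStream(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            while (in.read(buffer) > 0) {
                out.write(buffer); // writes all 4 bytes, even on a short read
            }
        } catch (IOException e) {
            throw new IllegalStateException(e); // cannot happen in memory
        }
        return out.size();
    }

    // Canonical loop: writes only the bytes actually read.
    static int canonicalSize(byte[] data) {
        byte[] buffer = new byte[4];
        InputStream in = new ByteArrayInputStream(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            int count;
            while ((count = in.read(buffer)) > 0) {
                out.write(buffer, 0, count);
            }
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
        return out.size();
    }

    public static void main(String[] args) {
        byte[] data = new byte[10];
        System.out.println(buggySize(data));     // 12 -- two stale padding bytes
        System.out.println(canonicalSize(data)); // 10 -- exact
    }
}
```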