I am having a problem finding the right encoding for a file that is saved to the database through FileUpload in ASP.NET, on SQL Server 2008, in a column of type Image. I need to migrate a web system to an Android application, using an ASP.NET web service to communicate with the SQL Server database, but the saved format does not match the files that are already saved. I don't understand whether it is a question of character encoding (ASCII, UTF-8, ...), a problem with Base64 encoding and decoding, or whether it would be more appropriate to treat the file as hexadecimal.
The file is read by the web system through the FileUpload component:
file.ARCHIVE = FileUpload1.FileBytes;
and then saved with:
context.SaveChanges();
The column type expected by the database is Image, and the web system reads the file back normally after saving.
I need to do the same process from a native Android application (Java), so I read the file, convert it to Base64, and send it to the web service, which decodes it and saves it to the same column type in the database. When I compare the stored value with one saved by the web system, the strings are different, so the web system considers the file corrupted, even though the Android application reads it normally.
I have already tested command.Parameters.Add("@ARCHIVE", SqlDbType.Image).Value = bytes; for the save, but it seems that even before this save the file content is already different from what the web system produces.
On Android (Java) we read the file like this:
InputStream inputStream = context.openFileInput(filename);
if (inputStream != null) {
    InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
    BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
    String receiveString = "";
    StringBuilder stringBuilder = new StringBuilder();
    while ((receiveString = bufferedReader.readLine()) != null) {
        stringBuilder.append(receiveString);
    }
    inputStream.close();
    ret = stringBuilder.toString();
}
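Reading a binary file through an InputStreamReader decodes the bytes as text, and readLine() additionally drops the line terminators, so a Base64 string built from it can never match the original bytes. A byte-safe sketch of the read (Base64.NO_WRAP is an assumption; use whatever variant the web service decodes):
import android.util.Base64;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// Read the file as raw bytes; never route binary data through a Reader.
InputStream inputStream = context.openFileInput(filename);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
byte[] buffer = new byte[8192];
int count;
while ((count = inputStream.read(buffer)) != -1) {
    bytes.write(buffer, 0, count);
}
inputStream.close();

// Base64-encode the untouched bytes for the web service call.
String encoded = Base64.encodeToString(bytes.toByteArray(), Base64.NO_WRAP);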
I am creating an Azure Function in Java. My requirement is to copy a blob from one container to another with encryption, so for encrypting the blob I add 4 bytes before and after the blob content while uploading it to the sink container.
Now I need to fetch the blob content, and for this I found a binding in Azure:
@BlobInput(
    name = "InputFileName",
    dataType = "binary",
    path = sourceContainerName + "/{InputFileName}")
byte[] content,
Here byte[] content receives the content of the blob, but I am facing some problems: whatever file name I pass as the InputFileName parameter, the function returns 200 OK as if it succeeded, and exception handling is also difficult with the binding.
So I am looking for other ways to fetch blob content. Please answer if there are any other methods or classes.
If you are looking for more control, instead of using the bindings you can use the Azure Storage SDK directly. Check out the quickstart doc for getting set up.
This sample code has full end-to-end code that you could build upon. Here is the part you are looking for, for reference:
String data = "Hello world!";
InputStream dataStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
/*
* Create the blob with string (plain text) content.
*/
blobClient.upload(dataStream, data.length());
dataStream.close();
/*
* Download the blob's content to output stream.
*/
int dataSize = (int) blobClient.getProperties().getBlobSize();
ByteArrayOutputStream outputStream = new ByteArrayOutputStream(dataSize);
blobClient.downloadStream(outputStream);
outputStream.close();
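For context, blobClient in the sample is a com.azure.storage.blob.BlobClient; a minimal sketch of how one might be constructed (the connection string, container name, and blob name are placeholders):
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

// Placeholders: substitute your own connection string and names.
BlobServiceClient service = new BlobServiceClientBuilder()
        .connectionString("<storage-connection-string>")
        .buildClient();
BlobClient blobClient = service
        .getBlobContainerClient("source-container")
        .getBlobClient("input-file-name");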
I have an input file that comes in "ANSI" encoding (Unix file format), and I convert that file to UTF-8.
Before converting to UTF-8, there is a special character like this in the input file:
»
After converting to UTF-8, it becomes:
û
When I process the file as is, without converting it to UTF-8, all the special characters disappear and data is lost as well. But when I process the file after converting it to UTF-8, all the data appears in the output file, except that the special character comes out as shown above rather than as in the input.
Here is my ANSI to UTF-8 conversion (it could be wrong; please correct me if I am wrong somewhere):
FileInputStream fis = new FileInputStream("inputtextfile.txt");
InputStreamReader isr = new InputStreamReader(fis, "ISO-8859-1");
Reader in = new BufferedReader(isr);

FileOutputStream fos = new FileOutputStream("outputfile.txt");
OutputStreamWriter osw = new OutputStreamWriter(fos, "UTF-8");
Writer out = new BufferedWriter(osw);

int ch;
out.write("\uFEFF"); // write a UTF-8 BOM
while ((ch = in.read()) > -1) {
    out.write(ch);
}
out.close();
in.close();
After this I process the file further for the final output.
I'm using the Talend ETL tool (a Java-based ETL tool) to create the final output from the generated UTF-8 file.
What I want is to process my file so that I get the same special characters in the output as in the input file.
I'm using Java 1.8 for this whole processing. I'm stuck in this situation and have never dealt with special characters before.
Any suggestion would be helpful.
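One way to narrow this down is to check the raw byte value of the offending character: '»' is 0xBB in both ISO-8859-1 and windows-1252, so if the file shows some other byte there, the assumed source encoding is wrong. A small sketch (the file name is taken from the code above):
import java.io.FileInputStream;
import java.io.IOException;

public class DumpBytes {
    public static void main(String[] args) throws IOException {
        // Print every byte as hex so the special character can be
        // looked up in a code page table.
        try (FileInputStream fis = new FileInputStream("inputtextfile.txt")) {
            int b;
            while ((b = fis.read()) != -1) {
                System.out.printf("%02X ", b);
            }
        }
    }
}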
Is a PrintStream appropriate for sending image files through a socket? I'm currently doing a homework assignment where I have to write a web proxy from scratch using basic sockets.
When I configure Firefox to use my proxy, everything works fine except that images don't download. If I go to an image file directly, Firefox comes back with the error: "The image cannot be displayed because it contains errors."
Here is my code for sending the response from the server back to the client (Firefox):
BufferedReader serverResponse = new BufferedReader(new InputStreamReader(webServer.getInputStream()));
String responseLine;
while ((responseLine = serverResponse.readLine()) != null) {
    serverOutput.println(responseLine);
}
In the code above, serverOutput is a PrintStream object. I am wondering if the PrintStream is somehow corrupting the data?
No, it is never appropriate to treat bytes as text unless you know they are text.
Specifically, the InputStreamReader will try to decode your image (which can be treated as a byte array) to a String. Then your PrintStream will try to encode the String back to a byte array.
There is no guarantee that this will produce the original byte array. You might even get an exception, depending on what encoding Java decides to use, if some of the image bytes aren't valid encoded characters.
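A byte-safe version of the relay loop might look like this (a sketch; clientSocket stands in for whatever socket serverOutput currently wraps):
import java.io.InputStream;
import java.io.OutputStream;

InputStream serverIn = webServer.getInputStream();
OutputStream clientOut = clientSocket.getOutputStream();

// Copy raw bytes; with no Reader or Writer in the path, nothing is
// decoded, re-encoded, or newline-translated.
byte[] buffer = new byte[8192];
int count;
while ((count = serverIn.read(buffer)) != -1) {
    clientOut.write(buffer, 0, count);
}
clientOut.flush();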
I have a PHP SOAP server (using nuSOAP with WSDL) that sends the content of an HTML page. Of course, the HTML can be encoded in different encodings, but this parameter is of type base64Binary in the XML, and I receive the HTML in its native encoding without problems.
To test it, I have written three SOAP clients, in PHP, C#, and Java 6, and with the first two I have no problem. The Java client was made using wsimport 2.1, and an example of the code looks like this:
FileInputStream file = new FileInputStream(new File("/tmp/chinese.htm"));
BufferedReader buffer = new BufferedReader(new InputStreamReader(file, "BIG5"));
String line;
String content = "";
while ((line = buffer.readLine()) != null)
    content += line + "\n";

FileManagerAPI upload = new FileManagerAPI();
FileManagerAPIPortType servUpload = upload.getFileManagerAPIPort();
BigInteger result = servUpload.apiControllerServiceUploadHTML(
        "http://www.test.tmp/因此鳥哥建議您務.html", content.getBytes());
The problem is that, before sending the HTML Base64-encoded, only the Java client re-encodes the HTML content to UTF-8, so when PHP receives the file, the server treats it as a "UTF-8 file", not as a "BIG5 file".
The question is: how can I avoid that first UTF-8 encoding, or at least apply the UTF-8 encoding after the Base64 step, not before?
Thanks in advance.
It looks like you need to convert the file from UTF-8 (I think that's the encoding of /tmp/chinese.htm) to BIG5 first.
To convert a file's content, read the file and re-encode it, for example with iconv:
$path = '/tmp/chinese.htm';
$buffer = file_get_contents($path);
$buffer = iconv('UTF-8', 'BIG5', $buffer);
The buffer $buffer is now re-encoded from UTF-8 into BIG5.
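Alternatively, the re-encoding can be avoided on the Java side by never building a String at all: read the file as raw bytes and pass those to the service, so the BIG5 bytes reach the Base64 layer untouched. A sketch (kept Java 6 compatible; the service call is the one from the question):
import java.io.File;
import java.io.FileInputStream;
import java.math.BigInteger;

// Read /tmp/chinese.htm as raw bytes -- no Reader, no String,
// so no charset conversion can happen.
File f = new File("/tmp/chinese.htm");
byte[] raw = new byte[(int) f.length()];
FileInputStream fis = new FileInputStream(f);
int off = 0;
while (off < raw.length) {
    int n = fis.read(raw, off, raw.length - off);
    if (n < 0) break;
    off += n;
}
fis.close();

BigInteger result = servUpload.apiControllerServiceUploadHTML(
        "http://www.test.tmp/因此鳥哥建議您務.html", raw);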
This problem seems to happen inconsistently. We are using a Java applet to download a file from our site, which we store temporarily on the client's machine.
Here is the code that we are using to save the file:
URL targetUrl = new URL(urlForFile);
InputStream content = (InputStream) targetUrl.getContent();
BufferedInputStream buffered = new BufferedInputStream(content);
File savedFile = File.createTempFile("temp", ".dat");
FileOutputStream fos = new FileOutputStream(savedFile);
int letter;
while ((letter = buffered.read()) != -1)
    fos.write(letter);
fos.close();
Later, I try to access that file by using:
ObjectInputStream keyInStream = new ObjectInputStream(new FileInputStream(savedFile));
Most of the time it works without a problem, but every once in a while we get the error:
java.io.StreamCorruptedException: invalid stream header: 0D0A0D0A
which makes me believe that it isn't saving the file correctly.
I'm guessing that the operations you've done with getContent and BufferedInputStream have treated the file like an ASCII file, converting newlines or carriage returns into carriage return + newline (0x0D0A), which has confused ObjectInputStream (which expects serialized data objects).
If you are using an FTP URL, the transfer may be occurring in ASCII mode.
Try appending ";type=I" to the end of your URL.
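For example (the host and path are placeholders):
ftp://example.com/files/temp.dat;type=I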
Why are you using ObjectInputStream to read it?
As per the javadoc:
An ObjectInputStream deserializes primitive data and objects previously written using an ObjectOutputStream.
Probably the error comes from the fact that you didn't write the file with an ObjectOutputStream.
Try reading it with FileInputStream only.
Here's a sample for binary (although not the most efficient way).
Here's another used for text files.
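In short, a byte-safe read of the temp file might look like this (a sketch; savedFile is the file from the question):
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;

// Read savedFile back as raw bytes instead of deserializing it.
FileInputStream fis = new FileInputStream(savedFile);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] buff = new byte[16 * 1024];
int count;
while ((count = fis.read(buff)) != -1) {
    baos.write(buff, 0, count);
}
fis.close();
byte[] fileBytes = baos.toByteArray();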
There are 3 big problems in your sample code:
You're not treating the input as just bytes
You're needlessly pulling the entire object into memory at once
You're doing multiple method calls for every single byte read and written -- use the array based read/write!
Here's a redo:
URL targetUrl = new URL(urlForFile);
InputStream is = targetUrl.openStream();
File savedFile = File.createTempFile("temp", ".dat");
FileOutputStream fos = new FileOutputStream(savedFile);

int count;
byte[] buff = new byte[16 * 1024];
while ((count = is.read(buff)) != -1) {
    fos.write(buff, 0, count);
}
fos.close();
is.close();
You could also step back from the code and check whether the file on your client is the same as the file on the server. If you have both files on an XP machine, you should be able to use the FC utility to compare them (check FC's help if you need to run a binary compare, as there is a switch for that). If you're on Unix, I don't know the file compare program, but I'm sure there's something.
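For example, a binary compare on Windows (the file names are placeholders):
fc /B clientcopy.dat servercopy.dat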
If the files are identical, then you're looking at a problem with the code that reads the file.
If the files are not identical, focus on the code that writes your file.
Good luck!