I want to encrypt a file in Java very basically: simply read the file line by line and shift the value of each char with "char += key", where key is an integer.
The problem is that if I use a key greater than or equal to 2, it doesn't work anymore.
public void encryptData(int key) {
    System.out.println("Encrypt");
    try {
        BufferedReader br = new BufferedReader(new FileReader("encrypted.data"));
        BufferedWriter out = new BufferedWriter(new FileWriter("temp_encrypted.data"));
        String str;
        while ((str = br.readLine()) != null) {
            char[] str_array = str.toCharArray();
            // Encrypt one line
            for (int i = 0; i < str.length(); i++) {
                str_array[i] += key;
            }
            // Put the line in the temp file
            str = String.valueOf(str_array);
            out.write(str_array);
        }
        br.close();
        out.close();
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
The decrypt function is the same, but with the input/output files interchanged, and it subtracts the key instead of adding it.
I checked char by char, and indeed the header gets messed up when I use a key value > 1. Any ideas? Is it because the maximum value of a char is exceeded?
You're basically implementing a general-purpose Caesar cipher.
Adding a number to a character can turn it into a newline (or another line terminator), which breaks the round trip when you read the file back with a BufferedReader.
It's best to manipulate the text as a byte stream, which correctly encodes and decodes newlines and any non-ASCII characters.
public void encryptData(int key) {
    System.out.println("Encrypt");
    try {
        BufferedInputStream in = new BufferedInputStream(new FileInputStream("raw-text.data"));
        BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("temp_encrypted.data"));
        int ch;
        while ((ch = in.read()) != -1) {
            // NOTE: write(int) method casts int to byte
            out.write(ch + key);
        }
        out.close();
        in.close();
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
public void decryptData(int key) {
    System.out.println("Decrypt");
    try {
        BufferedInputStream in = new BufferedInputStream(new FileInputStream("temp_encrypted.data"));
        BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("decrypted.data"));
        int ch;
        while ((ch = in.read()) != -1) {
            out.write(ch - key);
        }
        out.close();
        in.close();
    } catch (IOException e) {
        System.out.println(e.getMessage());
    }
}
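One detail worth spelling out (my own note, not part of the original answer): because write(int) keeps only the low 8 bits of its argument, the addition wraps modulo 256, and the subtraction in decryptData wraps back the same way, so the round trip is lossless for any key. A tiny sketch of the arithmetic:

public class CaesarByteDemo {
    public static void main(String[] args) {
        int key = 200;                             // deliberately large to force wrap-around
        int original = 0xF5;                       // an arbitrary byte value (245)
        int encrypted = (original + key) & 0xFF;   // what write(int) actually stores
        int decrypted = (encrypted - key) & 0xFF;  // masking undoes the wrap
        System.out.println(original == decrypted); // prints: true
    }
}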
(The actual question is at the bottom.)
I'm using Netty to transfer a file to another server.
I limit my file chunks to 1024*64 bytes (64 KB) because of the WebSocket protocol. The following method is a local example of what happens to the file:
public static void rechunck(File file1, File file2) {
    FileInputStream is = null;
    FileOutputStream os = null;
    try {
        byte[] buf = new byte[1024*64];
        is = new FileInputStream(file1);
        os = new FileOutputStream(file2);
        while (is.read(buf) > 0) {
            os.write(buf);
        }
    } catch (IOException e) {
        Controller.handleException(Thread.currentThread(), e);
    } finally {
        try {
            if (is != null && os != null) {
                is.close();
                os.close();
            }
        } catch (IOException e) {
            Controller.handleException(Thread.currentThread(), e);
        }
    }
}
The file is read by the InputStream into a byte array and written directly to the OutputStream.
The content of the file cannot change during this process.
To get the MD5 hashes of the files, I wrote the following method:
public static String checksum(File file) {
    InputStream is = null;
    try {
        is = new FileInputStream(file);
        MessageDigest digest = MessageDigest.getInstance("MD5");
        byte[] buffer = new byte[8192];
        int read = 0;
        while ((read = is.read(buffer)) > 0) {
            digest.update(buffer, 0, read);
        }
        return new BigInteger(1, digest.digest()).toString(16);
    } catch (IOException | NoSuchAlgorithmException e) {
        Controller.handleException(Thread.currentThread(), e);
    } finally {
        try {
            is.close();
        } catch (IOException e) {
            Controller.handleException(Thread.currentThread(), e);
        }
    }
    return null;
}
So, in theory it should return the same hash, shouldn't it? The problem is that it returns two different hashes, and they don't vary between runs. The file size stays the same, and so does the content.
When I run the method once with in: file-1, out: file-2, and again with in: file-2, out: file-3, the hashes of file-2 and file-3 are identical! This means the method changes the file the same way every time.
1. 58a4a9fbe349a9e0af172f9cf3e6050a
2. 7b3f343fa1b8c4e1160add4c48322373
3. 7b3f343fa1b8c4e1160add4c48322373
Here is a little test that compares all buffers for equality. The test passes, so there are no differences between the buffers.
File file1 = new File("controller/templates/Example.zip");
File file2 = new File("controller/templates2/Example.zip");
try {
    byte[] buf1 = new byte[1024*64];
    byte[] buf2 = new byte[1024*64];
    FileInputStream is1 = new FileInputStream(file1);
    FileInputStream is2 = new FileInputStream(file2);
    boolean run = true;
    while (run) {
        int read1 = is1.read(buf1), read2 = is2.read(buf2);
        String result1 = Arrays.toString(buf1), result2 = Arrays.toString(buf2);
        boolean test = result1.equals(result2);
        System.out.println("1: " + result1);
        System.out.println("2: " + result2);
        System.out.println("--- TEST RESULT: " + test + " ----------------------------------------------------");
        if (!(read1 > 0 && read2 > 0) || !test) run = false;
    }
} catch (IOException e) {
    e.printStackTrace();
}
Question: Can you help me chunk the file without changing the hash?
while (is.read(buf) > 0) {
    os.write(buf);
}
The read() method with the array argument returns the number of bytes read from the stream. When the file length is not an exact multiple of the byte array length, the last return value will be smaller than the array length because you reached the end of the file.
However, your os.write(buf); call writes the whole byte array to the stream, including the stale bytes after the end of the last read. This means the written file ends up bigger, and therefore the hash changes.
Interestingly, you didn't make this mistake when updating the message digest:
while ((read = is.read(buffer)) > 0) {
    digest.update(buffer, 0, read);
}
You just have to do the same when you "rechunk" your files.
Your rechunck method has a bug in it. Since you use a fixed-size buffer, the file is split into byte-array parts, but the last part of the file can be smaller than the buffer, which is why you write too many bytes to the new file. That's why you no longer get the same checksum. The error can be fixed like this:
public static void rechunck(File file1, File file2) {
    FileInputStream is = null;
    FileOutputStream os = null;
    try {
        byte[] buf = new byte[1024*64];
        is = new FileInputStream(file1);
        os = new FileOutputStream(file2);
        int length;
        while ((length = is.read(buf)) > 0) {
            os.write(buf, 0, length);
        }
    } catch (IOException e) {
        Controller.handleException(Thread.currentThread(), e);
    } finally {
        try {
            if (is != null)
                is.close();
            if (os != null)
                os.close();
        } catch (IOException e) {
            Controller.handleException(Thread.currentThread(), e);
        }
    }
}
Thanks to the length variable, the write method knows that only the first length bytes of the array belong to the file; beyond that, the array still contains stale bytes from the previous read that no longer belong to the file.
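A quick way to convince yourself the fix works (the copy's file name is hypothetical; this assumes the corrected rechunck and the checksum method above live in the same class):

File src = new File("controller/templates/Example.zip");
File dst = new File("controller/templates/Example-rechunked.zip");
rechunck(src, dst);
// After the fix, both files should produce the same MD5
System.out.println(checksum(src).equals(checksum(dst))); // expect: true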
I'm attempting to read in every character (tabs, new lines) in a text file, but I'm having some trouble reading all of them in. My current method reads the tabs but not the new lines. Here is the code:
// reads each character in as an integer value; returns an ArrayList with each value
public static ArrayList<Integer> readFile(String file) {
    FileReader fr = null;
    ArrayList<Integer> chars = new ArrayList<Integer>(); // to be returned containing all commands in the file
    try {
        fr = new FileReader(file);
        BufferedReader br = new BufferedReader(fr);
        int tempChar = ' ';
        String tempLine = "";
        while ((tempLine = br.readLine()) != null) {
            for (int i = 0; i < tempLine.length(); i++) {
                int tempIntValue = tempLine.charAt(i);
                chars.add(tempIntValue);
            }
        }
        fr.close();
        br.close();
    } catch (FileNotFoundException e) {
        System.out.println("Missing file");
        System.exit(0);
    } catch (IOException e) {
        System.out.println("Empty file");
        System.exit(0);
    }
    return chars;
}
I originally used the read() method instead of readLine(), but that had the same problem. I'm representing the chars as ints. Any help is really appreciated!
I suggest you use try-with-resources, List, and the diamond operator <>, and that you read each char with the BufferedReader.read() method.
public static List<Integer> readFile(String file) {
    List<Integer> chars = new ArrayList<>();
    try (FileReader fr = new FileReader(file);
         BufferedReader br = new BufferedReader(fr)) {
        int ch;
        while ((ch = br.read()) != -1) {
            chars.add(ch);
        }
    } catch (FileNotFoundException e) {
        System.out.println("Missing file");
        System.exit(0);
    } catch (IOException e) {
        System.out.println("Empty file");
        System.exit(0);
    }
    return chars;
}
The reason you aren't getting line endings is documented in the BufferedReader.readLine() Javadoc, which says in part:
A String containing the contents of the line, not including any line-termination characters...
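To see the difference directly, here's a small sketch of my own using an in-memory StringReader instead of a file:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ReadVsReadLine {
    public static void main(String[] args) throws IOException {
        // readLine() strips the line terminator...
        BufferedReader r1 = new BufferedReader(new StringReader("a\nb"));
        System.out.println(r1.readLine().length()); // 1: just 'a'
        // ...while read() returns every character, including '\n' (10)
        BufferedReader r2 = new BufferedReader(new StringReader("a\nb"));
        int ch;
        while ((ch = r2.read()) != -1) System.out.print(ch + " "); // 97 10 98
    }
}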
How do I read an entire record from a txt file, get each field separately, and convert each field into a separate character stream? Then write the individual characters (in a loop) to a plain ASCII output text file.
I have my class definition; I just cannot seem to write the output file properly, which has to be one plain ASCII text character at a time. I just need a little help. Here is what I have so far:
This is my first question, guys. Sorry if it isn't formatted well :( I'm trying to convert a file of objects to a plain ASCII character text file which I called "yankees.txt". I read it in with an ObjectInputStream; then I'm supposed to get each field separately, convert each field into a separate character stream, and write the characters one at a time from each field to my "yankees.txt".
public class yankeesfilemain {
    public static void main(String[] args) throws EOFException {
        ObjectInputStream is;
        OutputStream os;
        yankees y;
        int i, j, k;
        String name, pos;
        int number;
        File fout;
        try {
            is = new ObjectInputStream(new FileInputStream("yankees.yanks"));
            y = (yankees) is.readObject();
            fout = new File("yankees.txt");
            os = new FileOutputStream(fout);
            while (y != null) {
                name = y.getname();
                pos = y.getpos();
                number = y.getnum();
                for (i = 0; i < .length(); i++) {}
                for (j = 0; j < .length(); j++) {
                    pos = y.getpos();
                }
                for (k = 0; k < .length(); k++) {
                    number = y.getnum();
                }
                break;
            }
            os.close();
            is.close();
        } catch (EOFException eof) {
            eof.printStackTrace();
            System.exit(0);
        } catch (NullPointerException npe) {
            npe.printStackTrace();
            System.exit(0);
        } catch (NumberFormatException nfe) {
            nfe.printStackTrace();
            System.exit(0);
        } catch (IOException e) {
            e.printStackTrace();
            System.exit(0);
        }
    }
}
Please refer to the following code
public static void main(String[] args) throws IOException {
    InputStream in = new FileInputStream("C:\\11.txt");
    OutputStream out = new FileOutputStream("C:\\12.txt", true);
    try {
        byte[] buffer = new byte[1024];
        while (true) {
            int byteRead = in.read(buffer);
            if (byteRead == -1)
                break;
            out.write(buffer, 0, byteRead);
        }
    } catch (MalformedURLException ex) {
        System.err.println(args[0] + " is not a URL Java understands.");
    } finally {
        if (in != null)
            in.close();
        if (out != null) {
            out.close();
        }
    }
}
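For the field-by-field requirement specifically (which the byte-copy above doesn't address), here is a rough sketch of my own. It assumes the yankees class has the getname()/getpos()/getnum() getters shown in the question; the space and newline separators are my own choice:

import java.io.IOException;
import java.io.Writer;

public class YankeesFieldWriter {
    // Write one record's fields one character at a time, as plain text.
    static void writeRecord(Writer out, yankees y) throws IOException {
        String name = y.getname();
        String pos = y.getpos();
        String number = String.valueOf(y.getnum()); // turn the int into its digit characters
        for (int i = 0; i < name.length(); i++) out.write(name.charAt(i));
        out.write(' ');
        for (int j = 0; j < pos.length(); j++) out.write(pos.charAt(j));
        out.write(' ');
        for (int k = 0; k < number.length(); k++) out.write(number.charAt(k));
        out.write('\n');
    }
}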
The functions I'm using to convert the file to a string and then to an MD5 hash are below. I'm outputting the file paths and file names to make sure everything is cool. Is there anything I'm not considering that could change the file's fingerprint (it's an mp4 video, actually)? I'm checking it against md5sum on Ubuntu.
private static String readFileToString(String filePath) throws java.io.IOException {
    StringBuffer fileData = new StringBuffer(1000);
    BufferedReader reader = new BufferedReader(new FileReader(filePath));
    char[] buf = new char[1024];
    int numRead = 0;
    while ((numRead = reader.read(buf)) != -1) {
        String readData = String.valueOf(buf, 0, numRead);
        fileData.append(readData);
        buf = new char[1024];
    }
    reader.close();
    System.out.println(fileData.toString());
    return fileData.toString();
}
public static String getMD5EncryptedString(String encTarget) {
    MessageDigest mdEnc = null;
    try {
        mdEnc = MessageDigest.getInstance("MD5");
    } catch (NoSuchAlgorithmException e) {
        System.out.println("Exception while encrypting to md5");
        e.printStackTrace();
    } // Encryption algorithm
    mdEnc.update(encTarget.getBytes(), 0, encTarget.length());
    String md5 = new BigInteger(1, mdEnc.digest()).toString(16);
    return md5;
}
String isn't a container for binary data. Lose the two conversions between byte array and String: you should be reading the file as bytes and computing the MD5 directly from the bytes. You can do that while you're reading; you don't need to read the entire file first.
And MD5 isn't encryption: it's a secure hash.
Found this answer here: How to generate an MD5 checksum for a file in Android?
public static String fileToMD5(String filePath) {
    InputStream inputStream = null;
    try {
        inputStream = new FileInputStream(filePath);
        byte[] buffer = new byte[1024];
        MessageDigest digest = MessageDigest.getInstance("MD5");
        int numRead = 0;
        while (numRead != -1) {
            numRead = inputStream.read(buffer);
            if (numRead > 0)
                digest.update(buffer, 0, numRead);
        }
        byte[] md5Bytes = digest.digest();
        return convertHashToString(md5Bytes);
    } catch (Exception e) {
        return null;
    } finally {
        if (inputStream != null) {
            try {
                inputStream.close();
            } catch (Exception e) { }
        }
    }
}

private static String convertHashToString(byte[] md5Bytes) {
    String returnVal = "";
    for (int i = 0; i < md5Bytes.length; i++) {
        returnVal += Integer.toString((md5Bytes[i] & 0xff) + 0x100, 16).substring(1);
    }
    return returnVal;
}
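As an aside (not from the answers above, though the class itself is standard java.security API): DigestInputStream folds the digest update into the read loop, so you can't forget to pass the read count. A minimal sketch:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.math.BigInteger;
import java.security.DigestInputStream;
import java.security.MessageDigest;

public class Md5ViaDigestStream {
    public static String md5(String path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = new DigestInputStream(
                new BufferedInputStream(new FileInputStream(path)), md)) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* digest is updated as a side effect */ }
        }
        // Zero-pad to 32 hex digits, matching md5sum's output
        return String.format("%032x", new BigInteger(1, md.digest()));
    }
}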
I want to read a file line by line.
BufferedReader is much faster than RandomAccessFile or BufferedInputStream.
But the problem is that I don't know how many bytes I have read.
How can I find out the byte offset?
I tried:
String buffer;
int offset = 0;
while ((buffer = br.readLine()) != null)
    offset += buffer.getBytes().length + 1; // 1 is for the line separator
It works if the file is small.
But when the file becomes large, the offset becomes smaller than the actual value.
How can I get the correct offset?
There is no simple way to do this with BufferedReader because of two effects: character encoding and line endings. On Windows, the line ending is \r\n, which is two bytes; on Unix, the line separator is a single byte. BufferedReader handles both cases without you noticing, so after readLine() you won't know how many bytes were skipped.
Also, buffer.getBytes() only returns the correct result when your default encoding and the encoding of the data in the file happen to be the same. When doing byte[] <-> String conversion of any kind, you should always specify exactly which encoding should be used.
You also can't use a counting InputStream, because buffered readers read data in large chunks. After reading the first line of, say, 5 bytes, the counter in the inner InputStream might already report 4096, because the reader pulls that many bytes into its internal buffer at once.
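To make that concrete, here's a small demo of my own (a hand-rolled counting FilterInputStream, not any particular library class) showing the counter racing ahead of readLine():

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class BufferSkewDemo {
    // Minimal counting wrapper: counts bytes actually pulled from the underlying stream.
    static class CountingInputStream extends FilterInputStream {
        long count;
        CountingInputStream(InputStream in) { super(in); }
        @Override public int read() throws IOException {
            int b = super.read();
            if (b != -1) count++;
            return b;
        }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            int n = super.read(b, off, len);
            if (n > 0) count += n;
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "line1\nline2\nline3\n".getBytes("UTF-8");
        CountingInputStream cis = new CountingInputStream(new ByteArrayInputStream(data));
        BufferedReader br = new BufferedReader(new InputStreamReader(cis, "UTF-8"));
        System.out.println(br.readLine()); // "line1" (6 bytes including '\n')
        System.out.println(cis.count);     // 18: the reader buffered the whole (small) input
    }
}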
You can have a look at NIO for this. You can use a low level ByteBuffer to keep track of the offset and wrap that in a CharBuffer to convert the input into lines.
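Here's a simplified byte-level sketch of that idea (my own; it uses Files.readAllBytes for brevity rather than a streaming ByteBuffer/CharBuffer pipeline, so it's only suitable for files that fit in memory, and the file name is hypothetical):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class OffsetLines {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Paths.get("input.txt"));
        int start = 0;
        for (int i = 0; i < data.length; i++) {
            if (data[i] == '\n') {
                // Strip a preceding '\r' so Windows line endings decode cleanly
                int end = (i > start && data[i - 1] == '\r') ? i - 1 : i;
                String line = new String(data, start, end - start, StandardCharsets.UTF_8);
                System.out.println("Line at byte " + start + ": " + line);
                start = i + 1; // the next line begins right after the '\n'
            }
        }
        if (start < data.length) { // last line without a trailing newline
            String line = new String(data, start, data.length - start, StandardCharsets.UTF_8);
            System.out.println("Line at byte " + start + ": " + line);
        }
    }
}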
Here's something that should work. It assumes UTF-8, but you can easily change that.
import java.io.*;

class Main {
    public static void main(final String[] args) throws Exception {
        ByteCountingLineReader r = new ByteCountingLineReader(
            new ByteArrayInputStream(toUtf8("Hello\r\nWorld\n")));
        String line = null;
        do {
            long count = r.byteCount();
            line = r.readLine();
            System.out.println("Line at byte " + count + ": " + line);
        } while (line != null);
        r.close();
    }

    static class ByteCountingLineReader implements Closeable {
        InputStream in;
        long _byteCount;
        int bufferedByte = -1;
        boolean ended;

        // in should be a buffered stream!
        ByteCountingLineReader(InputStream in) {
            this.in = in;
        }

        ByteCountingLineReader(File f) throws IOException {
            in = new BufferedInputStream(new FileInputStream(f), 65536);
        }

        String readLine() throws IOException {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            if (ended) return null;
            while (true) {
                int c = read();
                if (ended && baos.size() == 0) return null;
                if (ended || c == '\n') break;
                if (c == '\r') {
                    c = read();
                    if (c != '\n' && !ended)
                        bufferedByte = c;
                    break;
                }
                baos.write(c);
            }
            return fromUtf8(baos.toByteArray());
        }

        int read() throws IOException {
            if (bufferedByte >= 0) {
                int b = bufferedByte;
                bufferedByte = -1;
                return b;
            }
            int c = in.read();
            if (c < 0) ended = true; else ++_byteCount;
            return c;
        }

        long byteCount() {
            return bufferedByte >= 0 ? _byteCount - 1 : _byteCount;
        }

        public void close() throws IOException {
            if (in != null) try {
                in.close();
            } finally {
                in = null;
            }
        }

        boolean ended() {
            return ended;
        }
    }

    static byte[] toUtf8(String s) {
        try {
            return s.getBytes("UTF-8");
        } catch (Exception __e) {
            throw rethrow(__e);
        }
    }

    static String fromUtf8(byte[] bytes) {
        try {
            return new String(bytes, "UTF-8");
        } catch (Exception __e) {
            throw rethrow(__e);
        }
    }

    static RuntimeException rethrow(Throwable t) {
        throw t instanceof RuntimeException ? (RuntimeException) t : new RuntimeException(t);
    }
}
Try using RandomAccessFile:
RandomAccessFile raf = new RandomAccessFile(filePath, "r");
String line;
while ((line = raf.readLine()) != null) {
    System.out.println(line);
    // get the offset of the position after this line
    long rowIndex = raf.getFilePointer();
}
To seek to a given offset, do:
raf.seek(offset);
I'd be curious about your final solution; in any case, I think using a long instead of an int for the offset will cover more situations in the code above.
If you want to read a file line by line, I would recommend this code:
import java.io.*;

class FileRead {
    public static void main(String args[]) {
        try {
            // Open the file
            FileInputStream fstream = new FileInputStream("textfile.txt");
            BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
            String strLine;
            // Read the file line by line
            while ((strLine = br.readLine()) != null) {
                // Print the content on the console
                System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) { // Catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }
}
I always used that method in the past, and it works great!