I am doing a cryptography experiment with One Time Pads.
I have two files of bytes, OTP and TEXT. OTP is large (>1GB) and TEXT is small (1K-10K). I want to create a third file CYPHERTEXT (the same size as TEXT) by performing modular addition of TEXT with OTP, starting at an offset into OTP. I coded this by hand using java.io, and it works, but it isn't very snappy, even with buffered IO (streams or writers).
I was looking for a way to add one underlying byte buffer to another using NIO, but could not find a (built-in) way to do that, or to filter the contents of TEXT with the data from OTP, except by hand. Is there any way to do something like this without reinventing the wheel? I thought I could use a selector. Ideally I'd like to be able to handle files larger than 2GB for both the OTP and the TEXT, which is why I was looking at NIO.
private static void createOTP() {
    ...
    System.out.print("Generating " + filename + " ");
    long startTime = System.nanoTime();
    FileOutputStream fos = new FileOutputStream(f);
    BufferedOutputStream bos = new BufferedOutputStream(fos, MB);
    SecureRandom random = new SecureRandom(); // create once, not on every iteration
    for (long currentSize = 0; currentSize < OTPSize; currentSize += baSize) {
        random.nextBytes(ba);
        bos.write(ba);
        if (currentSize % (MB * 20L * (long) sizeInGB) == 0) {
            System.out.print(".");
        }
    }
    long elapsedTime = System.nanoTime() - startTime;
    System.out.println(" OTP generation elapsed Time is " + (elapsedTime / 1000000.0) + " msec");
    bos.close(); // closing bos flushes it and closes the underlying fos as well
    ...
}
private static void symetricEncryptionDecryption(boolean encrypt) {
    ...
    outtext = new File(intext.getParentFile(), direction + ".OTP");
    BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(outtext), MB);
    byte[] plaindata = new byte[(int) intext.length()];
    DataInputStream dataIs = new DataInputStream(new FileInputStream(intext));
    dataIs.readFully(plaindata);
    dataIs.close();
    ByteBuffer bb = ByteBuffer.wrap(plaindata);
    DataInputStream in = new DataInputStream(new BufferedInputStream(new FileInputStream(otpFile)));
    in.skip(offset); // note: skip() may skip fewer bytes than requested; the return value should be checked
    while (bb.hasRemaining()) {
        bos.write(bb.get() + (encrypt ? in.readByte() : -in.readByte()));
    }
    bos.close();
    in.close();
    System.out.println("Offset: " + offset);
}
So is there a far slicker way to do this:
while(bb.hasRemaining()){
bos.write( bb.get() + (encrypt? in.readByte() : -in.readByte()) );
}
Or, for that matter, a slicker way to generate the OTP.
It's not clear exactly what you are trying to do, but if you memory-map the OTP file, giving you random access, and you read/process 8 bytes at a time, i.e. long values, you should be able to write an encrypted 10K text file in under 100 ms, where most of that time will be spent starting the JVM.
BTW: if you have access to the encrypted TEXT and the OTP file, you might be able to decode the text without the offset, i.e. you could work it out using brute force.
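For illustration, a minimal sketch of the memory-mapped idea (the apply helper and file names are hypothetical, and it processes one byte at a time rather than 8 for clarity; FileChannel.map takes a long offset, so the same call works against a region of a multi-GB pad):

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;

public class MappedOtpDemo {
    // Modular addition against a memory-mapped slice of the pad:
    // encrypt adds the pad byte, decrypt subtracts it (mod 256 via byte overflow).
    static byte[] apply(Path otp, long offset, byte[] text, boolean encrypt) throws Exception {
        try (FileChannel fc = FileChannel.open(otp)) {
            MappedByteBuffer pad = fc.map(FileChannel.MapMode.READ_ONLY, offset, text.length);
            byte[] out = new byte[text.length];
            for (int i = 0; i < text.length; i++) {
                out[i] = (byte) (encrypt ? text[i] + pad.get(i) : text[i] - pad.get(i));
            }
            return out;
        }
    }

    public static void main(String[] args) throws Exception {
        // Small stand-in for the >1GB OTP file
        Path otp = Files.createTempFile("otp", ".bin");
        byte[] padBytes = new byte[4096];
        new SecureRandom().nextBytes(padBytes);
        Files.write(otp, padBytes);

        byte[] text = "attack at dawn".getBytes("US-ASCII");
        byte[] cipher = apply(otp, 128, text, true);
        byte[] plain = apply(otp, 128, cipher, false);
        System.out.println(new String(plain, "US-ASCII")); // round-trips to the original
        Files.delete(otp);
    }
}
```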
Related
I have a requirement where I need to write multiple input streams to a temp file in Java. I have the code snippet below for the logic. Is there a more efficient way to do this?
final String tempZipFileName = "log" + "_" + System.currentTimeMillis();
File tempFile = File.createTempFile(tempZipFileName, ".zip"); // the suffix is used as-is, so include the dot
final FileOutputStream oswriter = new FileOutputStream(tempFile);
final byte[] buffer = new byte[102400]; // allocate once, outside the loop
for (final InputStream inputStream : readerSuppliers) {
    int bytesRead = 0;
    while ((bytesRead = inputStream.read(buffer)) > 0) {
        oswriter.write(buffer, 0, bytesRead);
    }
    oswriter.write(System.getProperty("line.separator").getBytes());
    inputStream.close();
}
oswriter.close();
I have multiple files ranging in size from 45 to 400 MB; for typical 45 MB and 360 MB files this method takes around 3 minutes on average. Can this be further improved?
You could try a BufferedInputStream
As @StephenC replied, it is irrelevant in this case to use a BufferedInputStream, because the buffer is big enough already.
I reproduced the behaviour on my computer (with an SSD drive), using a 100MB file.
It took 110 ms to create the new file with this example.
With a BufferedInputStream and a plain OutputStream: 120 ms.
With a plain InputStream and a BufferedOutputStream: 120 ms.
With a BufferedInputStream and a BufferedOutputStream: 110 ms.
I don't see execution times anywhere near as long as yours.
Maybe the problem comes from your readerSuppliers?
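If Java 9+ is available, another option is to let InputStream.transferTo do the copy loop instead of hand-rolling it; a sketch under that assumption (writeAll and the sample streams are illustrative, not from the question):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ConcatStreams {
    // Appends each input stream to the target file; transferTo uses an
    // internal buffer, so no explicit read/write loop is needed.
    static void writeAll(List<InputStream> sources, Path target) throws Exception {
        try (OutputStream out = Files.newOutputStream(target)) {
            for (InputStream in : sources) {
                try (in) {
                    in.transferTo(out);
                }
                out.write(System.lineSeparator().getBytes());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("log", ".zip");
        writeAll(List.of(
                new ByteArrayInputStream("first".getBytes()),
                new ByteArrayInputStream("second".getBytes())), tmp);
        System.out.println(Files.readAllLines(tmp)); // [first, second]
        Files.delete(tmp);
    }
}
```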
I have a very simple file copy in Java. It copies the DB file to my external (USB 3) HDD in about 6 minutes.
//First database:
try {
    fileInputStream = new FileInputStream(selectedfile);
    bufferedInputStream = new BufferedInputStream(fileInputStream);
    outputFile = new File("" + chosenDestination);
    fileOutputStream = new FileOutputStream(outputFile);
    bufferedOutputStream = new BufferedOutputStream(fileOutputStream);
    size = selectedfile.length();
    byte[] buffer = new byte[1024 * 6];
    dbLabel.setText("Copying Database...");
    while ((data = bufferedInputStream.read(buffer)) > 0) {
        bufferedOutputStream.write(buffer, 0, data);
        total += data;
        String percent = "" + total / (size / 100);
        pbar.setValue(Integer.valueOf(percent));
        sizeLabel.setText("(" + total / (1024 * 1024) + " / " + (size / (1024 * 1024)) + " MB) ");
    }
} finally {
    // closing the buffered streams flushes them and closes the underlying streams too
    bufferedOutputStream.close();
    bufferedInputStream.close();
}
Yesterday I bought a Transcend TS-PDU3 (PCI-E) USB add-on card, so both my computer and my HDD are USB 3 capable. But when I tried the copy job, it copied the file at the same speed. It is a Linux server, so no driver is needed (lspci sees it), and I think everything works, so I suspect the error is in the Java code. What buffer size should I choose for USB 3? Is 6 * 1024 too small a buffer size, or should I look for the error elsewhere? Thanks.
Provided you use Java 7, there is a much easier way to copy data from one file to another. With NIO, you can do this:
final Path src = selectedfile.toPath();
final Path dst = outputFile.toPath();
Files.copy(src, dst);
The JVM does a pretty good job at that. If you don't see an improvement over your current code, well... replace your code with this first and investigate further. As you said, no specific drivers should be needed in theory.
My approach is to encrypt MP3 files so that even after encryption the file can still be played with any MP3 player (it's OK if what you hear is trash, as long as it plays). So I'm going to split my MP3 files into byte arrays and then change the frames (according to the MP3 file structure) with the encryption method I use.
Here is the code I use for getting the bytes:
public class Audio {
    public static void main(String[] args) throws FileNotFoundException, IOException {
        File file = new File("test.wmv");
        FileInputStream fis = new FileInputStream(file);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        try {
            for (int readNum; (readNum = fis.read(buf)) != -1;) {
                // writes readNum bytes from buf, starting at offset 0, to the byte array output stream
                bos.write(buf, 0, readNum);
                System.out.println("read " + readNum + " bytes,");
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
        fis.close();
        byte[] bytes = bos.toByteArray();
        for (int i = 0; i < bytes.length; i++) {
            System.out.println(bytes[i]);
        }
        // below is the different part
        File someFile = new File("test2.wmv");
        FileOutputStream fos = new FileOutputStream(someFile);
        fos.write(bytes);
        fos.flush();
        fos.close();
    }
}
Here is the thing: my encryption and byte changes should happen per frame, right? As far as I've read, we can't access individual bits; the smallest unit of a file we can change is a byte. So how can I change frames that are defined at the bit level?
I did "google it" and I'm pretty confused. If anyone could show me the way, I'd be thankful.
Read full bytes and extract the bits: e.g. for the last 3 bits use b & ((1 << 3) - 1), and for the first 3 bits use b & (((1 << 3) - 1) << (Byte.SIZE - 3)). This is not needed for full bytes, of course, unless you convert them to an integer first and convert them back to bytes before use.
Since Java 7 you can also simply wrap a byte array in a BitSet (via BitSet.valueOf) for easy access to individual bits.
To encrypt bits, use a stream cipher mode like AES-CTR instead of a block cipher mode. The last step of a stream cipher is just a XOR which is perfect for encrypting bits. It does not require any padding either, though it does require a unique IV (per data encryption key). A hash over the filename could provide that, given that the name never changes of course.
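A small sketch of the masking described above (the extra shift down for the high bits is just to make the value readable; note that BitSet.valueOf indexes from the least significant bit of the first byte):

```java
import java.util.BitSet;

public class BitAccess {
    public static void main(String[] args) {
        byte b = (byte) 0b1011_0110;

        // Last (least significant) 3 bits: b & ((1 << 3) - 1)
        int low3 = b & ((1 << 3) - 1);              // 0b110 = 6
        // First (most significant) 3 bits, shifted down to a small value;
        // the & 0xFF avoids sign extension of the negative byte.
        int high3 = (b & 0xFF) >>> (Byte.SIZE - 3); // 0b101 = 5
        System.out.println(low3 + " " + high3);

        // Since Java 7, BitSet.valueOf gives per-bit access to a byte array.
        BitSet bits = BitSet.valueOf(new byte[] { b });
        System.out.println(bits.get(1));            // true: bit 1 of 0b...110 is set
    }
}
```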
I tried to display the size of the data already sent through the OutputStreamWriter, but it seems the write method behaves as if it were asynchronous: if the file is 60M and the upload rate is 200K/s, the output displays only a single line, "Data sent: 61210K" (or some similarly large number), instead of what I expected (a small number per second).
Did I miss something?
code piece:
Writer writer = new OutputStreamWriter(out, POST_ENCODING);
char[] buf = new char[1024];
int read = 0;
long bytes = 0;
while ((read = reader.read(buf)) >= 0) {
    bytes += read;
    if (System.currentTimeMillis() - lastMsgTimeStamp > 1000) {
        lastMsgTimeStamp = System.currentTimeMillis();
        System.out.println("Data sent: " + (bytes / 1024) + " K");
    }
    writer.write(buf, 0, read);
}
writer.flush();
writer.flush();
Assuming reader is an InputStreamReader, you are tracking how many characters are being read/sent, not the number of bytes. If you are not actually working with the text between reading and writing, you could use plain InputStream/OutputStream and monitor the byte count there.
Also, lastMsgTimeStamp has no declaration, is it being set to System.currentTimeMillis() somewhere outside your example?
In either case, OutputStreamWriter, along with the rest of the java.io.* package, is blocking, so when write is called your thread stops until it completes. However, there is some buffering going on, by the StreamEncoder inside OutputStreamWriter and also by the OutputStream implementation returned by URLConnection. Neither buffers such large amounts of data, so your problem is likely somewhere else.
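A minimal sketch of counting bytes at the stream level (the CountingOutputStream wrapper here is hand-rolled for illustration, not a standard JDK class):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CountingDemo {
    // Minimal wrapper that counts bytes as they pass through.
    static class CountingOutputStream extends FilterOutputStream {
        long count;
        CountingOutputStream(OutputStream out) { super(out); }
        @Override public void write(int b) throws IOException {
            out.write(b);
            count++;
        }
        @Override public void write(byte[] b, int off, int len) throws IOException {
            out.write(b, off, len);
            count += len;
        }
    }

    public static void main(String[] args) throws IOException {
        CountingOutputStream cos = new CountingOutputStream(new ByteArrayOutputStream());
        // "h\u00e9llo" is 5 characters but 6 bytes in UTF-8, which is
        // exactly the character/byte mismatch the answer above describes.
        cos.write("h\u00e9llo".getBytes("UTF-8"));
        System.out.println(cos.count); // 6
    }
}
```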
I have a binary file (about 100 MB) that I need to read in quickly. In C++ I could just load the file into a char pointer and march through it by incrementing the pointer. This of course would be very fast.
Is there a comparably fast way to do this in Java?
If you use a memory-mapped file or a regular buffer, you will be able to read the data as fast as your hardware allows.
File tmp = File.createTempFile("deleteme", "bin");
tmp.deleteOnExit();
int size = 1024 * 1024 * 1024;

long start0 = System.nanoTime();
FileChannel fc0 = new FileOutputStream(tmp).getChannel();
ByteBuffer bb = ByteBuffer.allocateDirect(32 * 1024).order(ByteOrder.nativeOrder());
for (int i = 0; i < size; i += bb.capacity()) {
    fc0.write(bb);
    bb.clear();
}
fc0.close();
long time0 = System.nanoTime() - start0;
System.out.printf("Took %.3f ms to write %,d MB using ByteBuffer%n", time0 / 1e6, size / 1024 / 1024);

long start = System.nanoTime();
FileChannel fc = new FileInputStream(tmp).getChannel();
MappedByteBuffer buffer = fc.map(FileChannel.MapMode.READ_ONLY, 0, size);
LongBuffer longBuffer = buffer.order(ByteOrder.nativeOrder()).asLongBuffer();
long total = 0; // used to prevent the JIT optimising the reads away
while (longBuffer.remaining() > 0)
    total += longBuffer.get();
fc.close();
long time = System.nanoTime() - start;
System.out.printf("Took %.3f ms to read %,d MB MemoryMappedFile%n", time / 1e6, size / 1024 / 1024);

long start2 = System.nanoTime();
FileChannel fc2 = new FileInputStream(tmp).getChannel();
bb.clear();
while (fc2.read(bb) > 0) {
    bb.flip(); // flip before consuming, otherwise the loop below sees no remaining data
    while (bb.remaining() > 0)
        total += bb.get();
    bb.clear();
}
fc2.close();
long time2 = System.nanoTime() - start2;
System.out.printf("Took %.3f ms to read %,d MB File via NIO%n", time2 / 1e6, size / 1024 / 1024);
prints
Took 305.243 ms to write 1,024 MB using ByteBuffer
Took 286.404 ms to read 1,024 MB MemoryMappedFile
Took 155.598 ms to read 1,024 MB File via NIO
This is for a file 10x larger than what you want. It's this fast because the data is being cached in memory (and I have an SSD drive). If you have fast hardware, the data can be read pretty quickly.
Sure, you could use a memory mapped file.
Here are two good links with sample code:
Thinking in Java: Memory-mapped files
Java Tips: How to create a memory-mapped file
If you don't want to go this route, just use an ordinary InputStream, such as a DataInputStream wrapped around a BufferedInputStream.
Most files will not need memory mapping but can simply be read by the standard Java I/O, especially since your file is so small. A reasonable way to read said files is by using a BufferedInputStream.
InputStream in = new BufferedInputStream(new FileInputStream("somefile.ext"));
Buffering is already well-tuned in Java for most computers. If you had a much larger file, then you would look at optimizing it further.
Reading the file from the disk is going to be the slowest part by miles, so the choice of API is likely to make no difference whatsoever. For this individual operation, of course; the JVM still takes a decade to start up, so add that time in.
Take a look at this blog post here on how to read a binary file into a byte array in Java:
http://www.spartanjava.com/2008/read-a-file-into-a-byte-array/
Copied from link:
File file = new File("/somepath/myfile.ext");
FileInputStream is = new FileInputStream(file);

// Get the size of the file
long length = file.length();
if (length > Integer.MAX_VALUE) {
    throw new IOException("The file is too big");
}

// Create the byte array to hold the data
byte[] bytes = new byte[(int) length];

// Read in the bytes
int offset = 0;
int numRead = 0;
while (offset < bytes.length
        && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
    offset += numRead;
}

// Ensure all the bytes have been read in
if (offset < bytes.length) {
    throw new IOException("The file was not completely read: " + file.getName());
}

// Close the input stream; all file contents are in the bytes variable
is.close();
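For what it's worth, on Java 7+ that whole read loop can be replaced with a single call; a small sketch:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadAllBytesDemo {
    public static void main(String[] args) throws Exception {
        // Small stand-in for the binary file in question
        Path file = Files.createTempFile("demo", ".bin");
        Files.write(file, new byte[] { 1, 2, 3 });

        // One call replaces the whole loop; it throws OutOfMemoryError
        // (not IOException) if the file is larger than a byte[] can hold.
        byte[] bytes = Files.readAllBytes(file);
        System.out.println(bytes.length); // 3
        Files.delete(file);
    }
}
```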
Using the DataInputStream from the Java SDK can be helpful here. DataInputStream provides methods such as readByte() or readChar(), if that's what's needed.
A simple example can be:
DataInputStream dis = new DataInputStream(
        new BufferedInputStream(new FileInputStream("file.dat"))); // buffer it: per-call readByte() is slow otherwise
try {
    while (true) {
        byte b = dis.readByte();
        // Do something with the byte
    }
} catch (EOFException eofe) {
    // Stream ended
} catch (IOException ioe) {
    // Input exception
}
Hope it helps. You can, of course, read the entire stream into a byte array and iterate through that as well...