Assets + skipBytes performance - java

DataInputStream istream = new DataInputStream(inputstream);
istream.skipBytes(offset);
int value = istream.readInt();
When the InputStream comes from getAssets().open("abc") and the file is big (up to 20 MB), this simple seek+read takes a long time to complete (~250 ms on my Android phone).
When I first copy the file to getCacheDir().getPath(), the same code takes less than 1 ms (ignoring the one-time ~3 s cost of copying). Plus the app will use more space: once for the copy and once for the asset.
I only read about 6 values (readInt()) at different locations.
Now to my question: is it possible to improve the performance of skipBytes() on assets? If yes, how? If not, is there an alternative to copying the file?

I finally found a working alternative to copying assets.
Example Code
try {
    AssetFileDescriptor fd_description = getAssets().openFd("test.raw");
    String apk_path = getPackageResourcePath(); // maybe getPackageCodePath()?
    RandomAccessFile file = new RandomAccessFile(apk_path, "r");
    file.seek(fd_description.getStartOffset());
    String value = file.readLine();
    Log.e("RAW ACCESS", "READ:" + value);
    file.seek(fd_description.getStartOffset());
    value = file.readLine();
    Log.e("RAW ACCESS", "READ:" + value);
} catch (IOException exp) {
    Log.e("RAW ACCESS", "ERROR:" + exp.toString());
}
Some Info
getPackageResourcePath() returns the path to your APK
getAssets().openFd("test.raw") returns the asset information you need to find the data in the APK
Your asset starts at fd_description.getStartOffset() and ends at fd_description.getStartOffset() + fd_description.getLength()
With if (fd_description.getLength() != fd_description.getDeclaredLength()) you can check whether the asset is compressed.
Important
Of course, this only works if the asset isn't compressed! But it's not that hard to disable the compression.

Use a RandomAccessFile if you possibly can. DataInputStream.skipBytes() only works by reading through the file.
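Combining the two answers above, a minimal sketch of reading a handful of ints at arbitrary offsets inside an uncompressed asset (the asset name and offsets here are illustrative assumptions):

AssetFileDescriptor fd = getAssets().openFd("test.raw");
// Direct seeking only works for uncompressed assets (see the check above).
if (fd.getLength() != fd.getDeclaredLength()) {
    throw new IOException("asset is compressed; direct seeking won't work");
}
long base = fd.getStartOffset();
long[] offsets = {0L, 1024L, 4096L}; // hypothetical read positions
try (RandomAccessFile raf = new RandomAccessFile(getPackageResourcePath(), "r")) {
    for (long offset : offsets) {
        raf.seek(base + offset);
        int value = raf.readInt(); // big-endian, same as DataInputStream.readInt()
        Log.d("RAW ACCESS", "value at " + offset + " = " + value);
    }
}

Each read is a direct seek within the APK, so the per-value cost no longer depends on the offset, unlike skipBytes() on a stream.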

Related

Regarding stitching of multiple files into a single file

I work on query latencies and have a requirement where I have several files which contain data. I want to aggregate this data into a single file. I use a naive technique where I open each file and collect all the data in a global file. I do this for all the files, but this is time-consuming. Is there a way in which you can stitch the end of one file to the beginning of another and create a big file containing all the data? I think many people might have faced this problem before. Can anyone kindly help?
I suppose you are currently doing the opening and appending by hand; otherwise I do not know why it would take a long time to aggregate the data, especially since you describe the number of files using "multiple" and "several", which seems to indicate it is not an enormous number.
Thus, I think you are just looking for a way to automatically do the opening and appending for you. In that case, you can use an approach similar to the one below. Note this creates the output file, or overwrites it if it already exists, then appends the contents of all specified files. If you want to call the method multiple times and append to the same file instead of overwriting an existing one, an alternative is to use a FileWriter with true as the second argument to its constructor so it appends to an existing file (a sketch of that variant follows the example below).
void aggregateFiles(List<String> fileNames, String outputFile) {
    PrintWriter writer = null;
    try {
        writer = new PrintWriter(outputFile);
        for (String fileName : fileNames) {
            Path path = Paths.get(fileName);
            String fileContents = new String(Files.readAllBytes(path));
            writer.println(fileContents);
        }
    } catch (IOException e) {
        // Handle IOException
    } finally {
        if (writer != null) writer.close();
    }
}
List<String> files = new ArrayList<>();
files.add("f1.txt");
files.add("someDir/f2.txt");
files.add("f3.txt");
aggregateFiles(files, "output.txt");
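For the append-instead-of-overwrite variant mentioned above, a minimal sketch using FileWriter's append-mode constructor:

// Sketch: same aggregation, but appending to outputFile across calls
// instead of overwriting it.
void appendFiles(List<String> fileNames, String outputFile) throws IOException {
    try (PrintWriter writer = new PrintWriter(new FileWriter(outputFile, true))) {
        for (String fileName : fileNames) {
            writer.println(new String(Files.readAllBytes(Paths.get(fileName))));
        }
    }
}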

Asset Manager has issues with my file (Android)

I have a file called Gate.IC inside my assets in my Android App.
I use this code to measure the length of the file in the assets:
private byte[] Buf = new byte[1024 * 512];

public int FileLength(String s) {
    int Count = 0;
    try {
        InputStream s2 = assetManager.open(s);
        int tmp = 0;
        while ((tmp = s2.read(Buf)) > 0)
            Count += tmp;
        s2.close();
    } catch (IOException e) {
        String Message = e.getMessage();
    }
    return Count;
}
This code works fine for all files except this one.
When it gets to this file, it does open it (and shows the correct file length), but when it reads it I get an IOException, and LogCat says "Error reading asset data" and then "Unable to access asset data: -1".
If I take a different file, change its name to Gate.IC, and don't have the actual Gate.IC file in the assets, it works.
If I change the name of the original Gate.IC into another asset's name, then I get the same error with the "cover" name.
I don't know what it is in this file that it just can't read it.
Here is the Rogue file:
https://dl.dropbox.com/u/8025882/RPG/Gate.IC
You can use this to get the length of the file:
getAssets().openFd( "filename" ).getLength();
I have solved the issue.
Well, as I mentioned, it turns out that ADT or the Android SDK packaging compresses some of the assets. My file, being my own custom format, gets compressed.
Once your file is compressed, you cannot read it the way I did.
There is a program in the Android SDK called aapt.exe. It does the packaging of the assets.
All you need to do is call this command with the -0 flag.
Sounds simple, right?
The issue is that Eclipse does not let you add flags to this command from within the ADT plugin.
You need to either edit the Android SDK XML build files, or to replace aapt.exe with your own program that calls the original aapt.exe program with the flags you want.
I did the latter.
Here is my devblog entry about it.
http://pompidev.net/2012/10/27/unable-to-access-asset-data-1-and-compressed-assetsandroid/
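For reference, an aapt invocation of this kind might look roughly like the following (paths and platform level are illustrative assumptions; check aapt's own help output for your SDK version):

# -0 IC stores *.IC assets uncompressed in the APK (extension is illustrative)
aapt package -f -0 IC \
    -M AndroidManifest.xml -S res -A assets \
    -I <sdk>/platforms/android-17/android.jar \
    -F bin/app.unsigned.apk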

Java/JAudiotagger: Mp3 wrong ID3 tag size

I'm building an mp3 tagging application using the JAudiotagger library. My application reads the mp3 metadata fine, and can write the metadata fine too, except for the artworks. So my problem is as follows:
When I add a number of artworks in the mp3 file, and save it, the file is getting bigger, which makes sense.
But when I remove one or all the artworks, the file size doesn't get smaller.
The actual problem lies in the ID3v2 tag of my mp3 file. When I remove an artwork, it is actually removed from the tag, but the tag size itself doesn't shrink at all.
The method I'm using when deleting an artwork is this:
// Get the artworkList from the parentFrame.
List<Artwork> list = parentFrame.getArtworkList();
// Get the tag from the parentFrame's mp3File.
AbstractID3v2Tag tag = parentFrame.getTag();
// Get the index of the artwork the user is currently looking at (and
// wants to delete too).
int visibleArtworkIndex = parentFrame.getVisibleArtworkIndex();
// Remove it from the list.
list.remove(visibleArtworkIndex);
// Update the parentFrame's copy of the artworkList.
parentFrame.setArtworkList(list);
// Update the tag (delete its whole artwork field).
tag.deleteArtworkField();
// If the list has more artworks left, add them to the tag.
if (!list.isEmpty()) {
    Iterator<Artwork> iterator = list.iterator();
    while (iterator.hasNext()) {
        try {
            tag.addField(iterator.next());
        } catch (FieldDataInvalidException e1) {
            e1.printStackTrace();
        }
    }
}
This removes the artwork from the list, then updates the tag itself by deleting all of its artworks and copying them all over again from the updated list.
My attempts at a solution were:
To create a new tag from the updated old tag (after calling tag.deleteArtworkField()), then add the artworks to the new tag; but the new tag had the same size as the old one.
To trim the mp3 file just before saving it by using tag.adjustPadding(File fileToBeTrimmed, int sizeToStoreTagBeforeAudioInBytes, long audioStartByte), which adjusts the length of the padding at the beginning of the MP3 file. The problem here is that I only know the wrong tag size, not the correct one, so I can't trim the mp3 correctly and I end up losing audio data.
To illustrate the problem better I included some images (not reproduced here): the mp3 file before; the mp3 file after the removal of one artwork, where the tag kept its previous size although it has fewer artworks; and how the file should be.
I hope anyone has any ideas. Thanks in advance.
This is fixed as of today.
By default jaudiotagger does not reclaim space when you make the metadata smaller, but now if you set
TagOptionSingleton.getInstance().setId3v2PaddingWillShorten(true);
before saving changes it will reclaim unnecessary padding to give the minimum file size possible.
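In context, the save flow might look like this minimal sketch (the file name is illustrative; checked-exception handling is omitted for brevity):

// Sketch: enable padding reclamation before saving, so removing artwork
// actually shrinks the file.
TagOptionSingleton.getInstance().setId3v2PaddingWillShorten(true);
MP3File mp3 = (MP3File) AudioFileIO.read(new File("song.mp3"));
AbstractID3v2Tag tag = mp3.getID3v2Tag();
tag.deleteArtworkField(); // drop the embedded artwork
mp3.commit();             // rewrite the file; unnecessary padding is reclaimed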
That is actually intended behavior; it is a sort of optimization.
When you add data to the ID3v2 tag and there is not enough space, the entire file needs to be rewritten to make enough room. When you remove data, the ID3v2 tag is just updated in place and the unused space is simply marked as free (it will be recycled when you add more data again).
Look for a "release unused space in tag" call in your library. You need to tell it explicitly that the free space should be released.
Edit: Looking at the Javadoc, I believe you need to set this option before working with your files:
TagOptionSingleton.getInstance().setId3v2PaddingWillShorten(true);
The methods
TagOptionSingleton.getInstance().setId3v2PaddingWillShorten(true);
TagOptionSingleton.getInstance().setOriginalSavedAfterAdjustingID3v2Padding(true);
seem not to be fully implemented (as of Jan 2018). For example, the Javadoc at http://www.jthink.net/jaudiotagger/maven/apidocs/org/jaudiotagger/tag/mp4/Mp4TagCreator.html
shows that the class Mp4TagCreator has not implemented padding when converting the metadata to raw data:
padding - TODO padding parameter currently ignored
For mp3 files I have a workaround using the library mp3agic (https://github.com/mpatric/mp3agic). Unlike jaudiotagger, which was last updated in 2015, it is still maintained. On Android you need to use version 0.9.0, since 0.9.1 uses java.nio.file classes, which are not supported by Android (https://github.com/mpatric/mp3agic/issues/141).
The workaround is to simply create a new tag, copy the tag data over, and write it to a new file. If successful, replace the old file with the new one. The new file will be smaller than the original if you do not copy the cover image. I believe this should also be possible with jaudiotagger, but I did not manage to do so. Here is how with mp3agic:
try {
    Mp3File song = new Mp3File(location, false);
    if (song.hasId3v2Tag()) {
        ID3v2 oritag = song.getId3v2Tag();
        byte[] image = oritag.getAlbumImage();
        if (image != null) {
            if (image.length > 10) {
                song = new Mp3File(location, true);
                oritag = song.getId3v2Tag();
                ID3v24Tag newtag = new ID3v24Tag();
                // copy metadata
                newtag.setArtist(oritag.getArtist());
                newtag.setArtistUrl(oritag.getArtistUrl());
                newtag.setOriginalArtist(oritag.getOriginalArtist());
                newtag.setAlbum(oritag.getAlbum());
                newtag.setAlbumArtist(oritag.getAlbumArtist());
                newtag.setAudiofileUrl(oritag.getAudiofileUrl());
                newtag.setAudioSourceUrl(oritag.getAudioSourceUrl());
                newtag.setUrl(oritag.getUrl());
                newtag.setGenre(oritag.getGenre());
                newtag.setGrouping(oritag.getGrouping());
                newtag.setTitle(oritag.getTitle());
                newtag.setTrack(oritag.getTrack());
                newtag.setPublisher(oritag.getPublisher());
                newtag.setPublisherUrl(oritag.getPublisherUrl());
                newtag.setCopyright(oritag.getCopyright());
                newtag.setCopyrightUrl(oritag.getCopyrightUrl());
                newtag.setComposer(oritag.getComposer());
                newtag.setCommercialUrl(oritag.getCommercialUrl());
                newtag.setComment(oritag.getComment());
                newtag.setYear(oritag.getYear());
                newtag.setKey(oritag.getKey());
                newtag.setRadiostationUrl(oritag.getRadiostationUrl());
                newtag.setPaymentUrl(oritag.getPaymentUrl());
                song.setId3v2Tag(newtag);
                try {
                    song.save(location + "intermed");
                    File from = new File(location + "intermed");
                    // if successful, replace the old file with the new file
                    if (from.exists()) {
                        File file = new File(location);
                        long sizeold = file.length();
                        file.delete();
                        File to = new File(location);
                        long sizenew = from.length();
                        from.renameTo(to);
                        freedspace += sizeold - sizenew;
                    }
                } catch (IOException | NotSupportedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
} catch (IOException | UnsupportedTagException | InvalidDataException e) {
    e.printStackTrace();
}
Remarks: I implemented this in my AudioCleanup app (https://play.google.com/store/apps/details?id=com.gluege.audiocleanup&hl=en); it works on mp3 files. I did not manage to remove album covers from other file types. If someone has a solution, please share it.
I dislike the ID3 standard, especially the padding. It's a waste of precious space on smartphones. I have seen albums where every song contained the same 1 MB cover image.

Glassfish - uploading images - doing it right

I am on the latest GlassFish (3.1.2), so no need for Apache FileItem and no bugs with getPart(). I read that the best practice for uploading images is saving them on the file system (see here for instance). I am editing already existing code (smelly at that), so I had the idea to do:
Part p1 = request.getPart("file");
System.out.println("!!!!!P1 : " + p1);
Prints :
!!!!!P1 : File name=DSC03660.JPG,
StoreLocation=C:\_\glassfish3\glassfish\domains\domain1\generated\jsp\elkethe\upload_7cb06306_138b413999a__7ffa_00000000.tmp,
size=2589152bytes, isFormField=false, FieldName=file
Newlines mine. In the code, people are doing:
if (request.getParameter("crop") != null) {
// get path on the server
String outputpath = this.getServletContext().getRealPath(
"images/temp/" + session.getId() + ".jpg");
// store photo
InputStream is = p1.getInputStream();
createPhoto(is, outputpath);
session.setAttribute("photo_path", "images/temp/" + session.getId()
+ ".jpg");
response.sendRedirect("cropping");
return;
}
Where
private void createPhoto(InputStream is, String outputpath) {
    FileOutputStream os = null;
    try {
        os = new FileOutputStream(outputpath);
        // write bytes taken from uploaded file to target file
        int ch = is.read();
        while (ch != -1) {
            os.write(ch);
            ch = is.read();
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        Helpers.close(os);
    }
}
Now what happens is that the file is uploaded to the StoreLocation (???) on submitting the form, so apparently all this p1.getInputStream() is for naught.
My questions are:
What is StoreLocation? How tmp are those GlassFish uploads? Where are all those parameters set? I did read BalusC's tutorial, but there is no mention of StoreLocation (Google is not very helpful either).
What would be a more professional way of handling the situation, including keeping the photos outside the webroot, but using facilities GlassFish provides (if it does provide any)?
Even p1 printing so nicely escapes me (it does not seem to override toString()).
I am also interested in tips on how one should rename the photos etc. (is this sessionID thing right? - check also the time trick):
if (request.getParameter("save") != null) {
long time = System.currentTimeMillis();
String path = "images/upload/" + session.getId() + time + ".jpg";
String outputpath = this.getServletContext().getRealPath(path);
// store photo
InputStream is = p1.getInputStream();
createPhoto(is, outputpath);
// etc
}
Good practice is to pick a path on the filesystem where photos will be uploaded. Often this path is made configurable via a Java system property (e.g. by passing -Dcom.mycompany.uploadPath=/path/to/photos/dir in the JVM arguments).
You can also use Java system properties to find environment-specific paths: user.dir, user.home etc. See System Properties in the Java SE Tutorial. Or, to use a GlassFish-relative path, see the GlassFish system properties.
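For instance, a configurable upload path with a home-directory fallback might be read like this (the property name is the illustrative one used in the snippet below):

// Sketch: read the upload directory from a JVM system property, falling
// back to a folder under the user's home directory.
String uploadPath = System.getProperty("com.mycompany.uploadPath",
        System.getProperty("user.home") + File.separator + "photo-uploads");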
Once you have reference to Part, it's just about doing file IO to copy the uploaded file into this upload path, eg:
Part part = // obtain part somehow..
String photoFileName = // build a file name somehow..
InputStream photoInputStream = part.getInputStream();
FileOutputStream photoOutputStream = new FileOutputStream(System.getProperty("com.mycompany.uploadPath") + File.separator + photoFileName);
IOUtils.copy(photoInputStream, photoOutputStream);
// close streams here...
The code above uses Apache Commons IO's IOUtils for convenience, but feel free to write your own copy method (a sketch follows). You should also add exception handling.
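A hand-rolled copy could look like this sketch (same assumed upload-path property as above):

// Sketch: stream copy without Apache Commons IO, using try-with-resources
// so both streams are closed even on failure.
try (InputStream in = part.getInputStream();
     OutputStream out = new FileOutputStream(
             System.getProperty("com.mycompany.uploadPath")
                     + File.separator + photoFileName)) {
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = in.read(buffer)) != -1) {
        out.write(buffer, 0, bytesRead);
    }
}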
What is StoreLocation ? How tmp are those glassfish uploads ? Where are all those parameters set ?
StoreLocation is just the java.io.File object for the FileItem's data's temporary location on the disk. It resides in javax.servlet.context.tempdir, which defaults to %GLASSFISH_HOME%\domains\domain1\generated\jsp\webApp. Those uploads are as tmp as anything (the lifetime of the file is tied to the lifetime of the FileItem instance; the file will be deleted when the instance is garbage collected - from here). I haven't yet managed to change the value of javax.servlet.context.tempdir programmatically (comment please) - it is the tempdir property of the sun-web-app element of sun-web.xml.
What would be a more professional way of handling the situation - including keeping the photos outside the webroot - but using facilities glassfish provides (if it does provide) ?
Well, a more professional way is to use Part.write() to move the file to the desired location. Due to the GlassFish implementation, though, you can't supply an absolute path to write() - a chore. I asked about it here.
As to where to save the file : https://stackoverflow.com/a/18664715/281545
That is for saving the file - to serve it from a location outside the app you need to define "alternatedocroot" properties in the sun-web.xml (or glassfish-web.xml).
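For example, such an entry in glassfish-web.xml might look like this sketch (the directory path is an illustrative assumption):

<!-- Serve /images/* from a directory outside the webroot. -->
<glassfish-web-app>
  <property name="alternatedocroot_1"
            value="from=/images/* dir=/var/myapp/photos"/>
</glassfish-web-app>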
Even p1 printing so nicely escapes me (it does not seem to override toString())
Oh yes, it does.
Interested in tips even in how should one rename the photos etc (is this sessionID thing Right ? - check also the time trick)
No, it is not - I tend towards File#createTempFile() (a naming sketch follows) - anyway, this is a different question, asked here.
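A naming sketch along those lines (the upload directory is the assumed configurable path from the answer above):

// Sketch: let File.createTempFile generate a unique, collision-free name
// inside the upload directory, instead of sessionId + timestamp.
File uploadDir = new File(System.getProperty("com.mycompany.uploadPath"));
File photoFile = File.createTempFile("photo_", ".jpg", uploadDir);
// then copy the uploaded Part's InputStream into photoFile as shown earlier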

How to estimate zip file size in java before creating it

I have a requirement wherein I have to create a zip file from a list of available files. The files are of different types, like txt, pdf, xml etc. I am using Java util classes to do it.
The requirement is to maintain a maximum file size of 5 MB. I should select files from the list based on timestamp and add them to the zip until the zip file size reaches 5 MB, then skip the remaining files.
Please let me know if there is a way in Java to estimate the zip file size in advance, without creating the actual file.
Or is there any other approach to handle this?
Wrap your ZipOutputStream in a personalized OutputStream, named here YourOutputStream.
The constructor of YourOutputStream will create another ZipOutputStream (zos2) which wraps a new ByteArrayOutputStream (baos):
public YourOutputStream(ZipOutputStream zos, int maxSizeInBytes)
When you want to write a file with YourOutputStream, it will first write it to zos2:
public void writeFile(File file) throws ZipFileFullException
public void writeFile(String path) throws ZipFileFullException
etc...
If baos.size() is under maxSizeInBytes:
write the file to zos1
else:
close zos1, baos and zos2, and throw an exception. For the exception, I can't think of an existing one; if there is, use it, else create your own IOException subclass, ZipFileFullException.
You need two ZipOutputStreams: one to be written to your drive, one to check whether your content is over 5 MB.
EDIT: In fact I checked; you can't remove a ZipEntry easily.
http://download.oracle.com/javase/6/docs/api/java/io/ByteArrayOutputStream.html#size()
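A minimal sketch of that double-stream idea (class and method names from the answer; the rest, including the exception type, is illustrative, and the in-memory size check ignores the central directory written on close):

// Sketch: mirror each entry into an in-memory ZipOutputStream first; only
// write it to the real archive if the measured size stays under the limit.
public class YourOutputStream implements Closeable {
    private final ZipOutputStream zos1; // real archive on disk
    private final ByteArrayOutputStream baos = new ByteArrayOutputStream();
    private final ZipOutputStream zos2 = new ZipOutputStream(baos); // dry run
    private final int maxSizeInBytes;

    public YourOutputStream(ZipOutputStream zos1, int maxSizeInBytes) {
        this.zos1 = zos1;
        this.maxSizeInBytes = maxSizeInBytes;
    }

    public void writeFile(File file) throws IOException {
        byte[] data = Files.readAllBytes(file.toPath());
        zos2.putNextEntry(new ZipEntry(file.getName())); // measure first
        zos2.write(data);
        zos2.closeEntry();
        if (baos.size() > maxSizeInBytes) {
            throw new IOException("zip would exceed " + maxSizeInBytes + " bytes");
        }
        zos1.putNextEntry(new ZipEntry(file.getName())); // still fits: write for real
        zos1.write(data);
        zos1.closeEntry();
    }

    @Override
    public void close() throws IOException {
        zos2.close();
        zos1.close();
    }
}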
+1 for Colin Herbert: add files one by one, either backing up the previous state or removing the last file if the archive gets too big. I just want to add some details:
Prediction is way too unreliable. E.g. a PDF can contain uncompressed text and compress down to 30% of the original, or it can contain already-compressed text and images, compressing to only 80%. You would need to inspect the entire PDF for compressibility, basically having to compress it.
You could try a statistical prediction, which could reduce the number of failed attempts, but you would still have to implement the recommendation above. Go with the simpler implementation first, and see if it's enough.
Alternatively, compress the files individually, then pick the files that won't exceed 5 MB if bound together. If unpacking is automated too, you could bind the zip files into a single uncompressed zip file (a rough sketch of the per-file measurement follows).
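A rough sketch of that per-file measurement with greedy selection (central-directory overhead is ignored, so keep a safety margin):

// Sketch: compress each file on its own in memory to learn its compressed
// size, then greedily pick files until the 5 MB budget is spent.
long budget = 5L * 1024 * 1024;
List<String> selected = new ArrayList<>();
for (String name : fileNames) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (ZipOutputStream zos = new ZipOutputStream(baos)) {
        zos.putNextEntry(new ZipEntry(name));
        zos.write(Files.readAllBytes(Paths.get(name)));
        zos.closeEntry();
    }
    if (baos.size() <= budget) {
        budget -= baos.size();
        selected.add(name);
    }
}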
There is a better option. Create a dummy LengthOutputStream that just counts the written bytes:
public class LengthOutputStream extends OutputStream {

    private long length = 0L;

    @Override
    public void write(int b) throws IOException {
        length++;
    }

    public long getLength() {
        return length;
    }
}
You can just simply connect the LengthOutputStream to a ZipOutputStream:
public static long sizeOfZippedDirectory(File dir) throws FileNotFoundException, IOException {
    try (LengthOutputStream sos = new LengthOutputStream();
         ZipOutputStream zos = new ZipOutputStream(sos)) {
        ... // Add ZIP entries to the stream
        return sos.getLength();
    }
}
The LengthOutputStream object counts the bytes of the zipped stream but stores nothing, so there is no file size limit. This method gives an accurate size estimation but is almost as slow as creating the ZIP file.
I don't think there is any way to estimate the size of the zip that will be created, because zips are processed as streams. It would also not be technically possible to predict the size of the compressed output unless you actually compress it.
I did this once on a project with known input types. We knew that, generally speaking, our data compressed around 5:1 (it was all text), so I'd check the file size and divide by 5...
In this case, the purpose for doing so was to check that files would likely be below a certain size. We only needed a rough estimate.
All that said, I have noticed zip applications like 7zip will create a zip file of a certain size (like a CD) and then split the zip off to a new file once it reaches the limit. You could look at that source code. I have actually used the command line version of that app in code before. They have a library you can use as well. Not sure how well that will integrate with Java though.
For what it is worth, I've also used a library called SharpZipLib. It was very good. I wonder if there is a Java port to it.
Maybe you could add a file each time, until you reach the 5 MB limit, and then discard the last file. Like @Gopi, I don't think there is any way to estimate it without actually compressing the file.
Of course, file size will not increase (or maybe a little, because of the zip header?), so at least you have a "worst case" estimation.
I just wanted to share how we implemented the manual way:
int maxSizeForAllFiles = 70000; // Read from property
int sizePerFile = 22000;        // Read from property
long totalFileSize = 0;         // (declared here; not shown in the original snippet)
boolean toBeZipped = false;     // (declared here; not shown in the original snippet)

/**
 * Iterate the attachment list to verify whether zipping is required.
 */
for (String attachFile : inputAttachmentList) {
    File file = new File(attachFile);
    totalFileSize += file.length();
    /**
     * Is zipping required, based on the size of this file?
     */
    if (file.length() >= sizePerFile) {
        toBeZipped = true;
        logger.info("File: " + attachFile
                + " Size: " + file.length()
                + " File required to be zipped, MAX allowed per file: " + sizePerFile);
        break;
    }
}

/**
 * Check if all attachments put together cross MAX_SIZE_FOR_ALL_FILES.
 */
if (totalFileSize >= maxSizeForAllFiles) {
    toBeZipped = true;
}

if (toBeZipped) {
    // Zip here, iterating over all attachments
}
