are numerous classes for specific functions good? - java

my main activity has over 400 lines of code and contains numerous methods which deal with PDF generation, making the page dynamic, and other condition checks.
I was wondering if I should split it all into different class files, each specific to its task, thereby creating at least 3-4 different classes.
So my queries are:
1) will this approach make the app faster?
2) will this increase the app's size drastically?
Also, is there any way to reduce the app's size?
I have deleted all unnecessary pics, XMLs & assets.
I just want the size to be below 5 MB.
Thanks in advance.

1) No. This won't noticeably affect the app's speed.
2) No. The compiled code is actually the smallest part of an APK; most of the size comes from resources.
One way to reduce your app's size is to optimize the compression of the images it contains. You might also be able to draw some of the images in code, as primitives such as lines, circles, and squares, depending on what's in them.
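For example, a solid circle that might otherwise ship as a PNG can be drawn in a custom view. This is just a sketch, and the view name is made up:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// Hypothetical view that replaces a bitmap asset: no PNG needed for a plain circle.
public class CircleView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CircleView(Context context) {
        super(context);
        paint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Draw a circle centered in the view, as large as will fit.
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f,
                Math.min(getWidth(), getHeight()) / 2f, paint);
    }
}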

These points may help you:
Java class files do increase the app size, but by so little that you can ignore it. You don't need to worry about that point.
As you mentioned, you have already deleted images and the like. Those are what actually drive up the app size, so delete as many unused ones as possible.
Creating more classes, each specific to its purpose, is the OOP approach and is very much recommended. If you want to modify something in the future, it will be easy to find the code, and your change will be confined to the class made for that sole purpose.
Hope it helps.

1) will this approach make the app faster?
Faster, I can't say. However, that approach gives you an opportunity to learn good design and will help with the future maintainability and extension of your project.
2) will this increase the app's size drastically?
Not at all.
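For illustration, here is a minimal sketch of the kind of split being discussed; the class and method names are hypothetical, not from the question:

import android.content.Context;
import java.io.File;

// Hypothetical helper extracted from the activity; it owns all the PDF logic.
public class PdfGenerator {
    private final Context context;

    public PdfGenerator(Context context) {
        this.context = context;
    }

    public File generateReport(String title) {
        // ... the PDF-generation code that previously lived in the activity ...
        return new File(context.getFilesDir(), title + ".pdf");
    }
}

The activity then shrinks to a delegation such as:
File report = new PdfGenerator(this).generateReport("invoice");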


how to know the performance of part of code using android studio

I want to know the performance of a small fragment of my code using Android Studio.
I am writing a small part of my code here to explain my question:
params.rightMargin = (int) (getResources().getDimensionPixelOffset(R.dimen.rightAndLeftMargin));
params.leftMargin = (int) (getResources().getDimensionPixelOffset(R.dimen.rightAndLeftMargin));
Alternatively, these lines can also be written as:
int margin = getResources().getDimensionPixelOffset(R.dimen.rightAndLeftMargin);
params.rightMargin = margin;
params.leftMargin = margin;
So, with the help of the Android Studio IDE, how do I compare the performance (memory usage, CPU load, execution time, etc.) of these two pieces of code?
NOTE: This is not the only case; I have dozens of similar cases, so I would like a general solution for all kinds of code.
With the Profiler built into Android Studio you can easily see which methods on which threads are being called in a selected time frame. Personally, I recommend using a Flame Chart to see which operations take the most time.
Keep in mind that having a profiler attached to your app's process slows it down significantly, so if a certain method call appears to take e.g. 1 second, in reality it takes considerably less.
I don't know Android at all. But when writing code and you have a computation to make, it is better to make it once and reuse the result. In your example, the latter is better.
I assume that the resources are already in the memory of the process, so the footprint here is probably minor. It amounts to creating a new stack frame for getResources, and another one for getDimensionPixelOffset. Creating a local variable is much cheaper than that.
The footprint increases significantly if you are making I/O operations, such as accessing files or making HTTP calls. In those cases it is much better to declare a local variable and reuse the result.
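If you want to sanity-check this yourself without the profiler, a crude harness in the same place your snippet runs is enough. A sketch (the iteration count is arbitrary, and JIT warm-up and caching make single runs unreliable, so run it a few times):

long start = System.nanoTime();
for (int i = 0; i < 10000; i++) {
    params.rightMargin = getResources().getDimensionPixelOffset(R.dimen.rightAndLeftMargin);
    params.leftMargin = getResources().getDimensionPixelOffset(R.dimen.rightAndLeftMargin);
}
long twoCalls = System.nanoTime() - start;

start = System.nanoTime();
for (int i = 0; i < 10000; i++) {
    int margin = getResources().getDimensionPixelOffset(R.dimen.rightAndLeftMargin);
    params.rightMargin = margin;
    params.leftMargin = margin;
}
long oneCall = System.nanoTime() - start;

// android.util.Log; both totals are in nanoseconds.
Log.d("Timing", "two calls: " + twoCalls + " ns, one call: " + oneCall + " ns");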

how to increase the speed of my application?

My application's requirement is to contact the web service, get the XML, parse it, and display it using a listfield. I am calling all of these classes (xmlhandler, objectmodel) and displaying the result using a listfield from a class that extends MainScreen, which is making my application slow.
Can anyone suggest how to make it fast?
Is it apt to pop up a loading screen, start a thread that contacts the web service, gets the XML and parses it, then kill the thread, populate the listfield, and display it?
Suggestions of any kind are welcome!
Test the speed of every part of your program. What I usually use is System.nanoTime(), finding the difference in time after every part of the program.
Find out which part is slow before you do anything else.
Otherwise, you'll waste a lot of your time on the wrong parts.
For this kind of timing work, I often log internally into a StringBuilder, or maybe just into an ArrayList holding raw, unformatted data. After the test is over, I format and output the data. This minimizes the effect of the logging on the timings.
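A minimal sketch of that pattern (the class name is made up):

import java.util.ArrayList;
import java.util.List;

// Collects raw timestamps during the run; formatting happens only afterwards,
// so the logging itself stays cheap.
public class TimingLog {
    private final List<String> labels = new ArrayList<String>();
    private final List<Long> times = new ArrayList<Long>();

    public void mark(String label) {
        labels.add(label);
        times.add(System.nanoTime());
    }

    public void dump() {
        // Print the elapsed time between each consecutive pair of marks.
        for (int i = 1; i < times.size(); i++) {
            System.out.println(labels.get(i) + ": "
                    + (times.get(i) - times.get(i - 1)) + " ns");
        }
    }
}

Usage would be along the lines of: timing.mark("start"); fetchXml(); timing.mark("fetch done"); parse(); timing.mark("parse done"); timing.dump(); where fetchXml and parse are stand-ins for your own methods.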
I can only guess, so forgive me if I'm wrong, but to me it seems more efficient to create the list field items only when they're actually viewed. So I'd try to keep only the parsed strings in memory and create only the UI items currently to be displayed, discarding the invisible ones. To make it smoother you can extend the window one or more pages before and after the current page, as in the sketch below.
This way the number of instantiated items stays constant. You may also add paging to the service layer to limit the number of records transmitted at once.
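Sketched generically (this is plain Java, not BlackBerry API, and all the names are invented): keep every parsed string, but only materialize UI rows inside a window around the current index.

import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class RowWindow<R> {
    public interface RowFactory<T> { T create(String data); }

    private final List<String> parsed;            // all parsed strings stay in memory
    private final RowFactory<R> factory;          // builds a UI row from one string
    private final Map<Integer, R> window = new HashMap<Integer, R>();
    private final int radius;                     // rows kept on either side of the current index

    public RowWindow(List<String> parsed, RowFactory<R> factory, int radius) {
        this.parsed = parsed;
        this.factory = factory;
        this.radius = radius;
    }

    public R rowAt(int index) {
        // Discard rows that have scrolled out of the window.
        Iterator<Integer> it = window.keySet().iterator();
        while (it.hasNext()) {
            if (Math.abs(it.next() - index) > radius) it.remove();
        }
        R row = window.get(index);
        if (row == null) {
            row = factory.create(parsed.get(index));
            window.put(index, row);
        }
        return row;
    }
}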

Image caching solutions

Happy Holidays everyone! I know people's brains are switched off right now, but can you please give me solutions or best practices for caching images in Java or J2ME? I am trying to load images from a server (input stream) and I want to be able to store them so I don't have to retrieve them from the server every time they need to be displayed again. Thank you.
The approach you'll want probably depends on the number of images as well as their typical file size. For instance, if you're only likely to use a small number of images or small-sized images, the example provided by trashgod makes a lot of sense.
If you're going to be downloading a very large number of images, or images with very large file sizes, you may consider caching the images to disk first. Then, your application could load and later dispose the images as needed to minimize memory usage. This is the kind of approach used by web browsers.
The simplest approach is to use whatever collection is appropriate to your application and funnel all image access through a method that checks the cache first. In this example, all access is via an image's index, so getImage() manipulates a cache of type List<ImageIcon>. A Map<String, ImageIcon> would be a straightforward alternative.
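A minimal sketch of the Map-based variant (the method names are mine, not from trashgod's actual example, and the download is stubbed out):

import java.util.HashMap;
import java.util.Map;
import javax.swing.ImageIcon;

public class ImageCache {
    private final Map<String, ImageIcon> cache = new HashMap<String, ImageIcon>();

    // All image access is funneled through here; the server is hit only on a miss.
    public ImageIcon getImage(String url) {
        ImageIcon icon = cache.get(url);
        if (icon == null) {
            icon = new ImageIcon(downloadBytes(url)); // hypothetical fetch helper
            cache.put(url, icon);
        }
        return icon;
    }

    private byte[] downloadBytes(String url) {
        // ... read the bytes from the server's input stream ...
        throw new UnsupportedOperationException("fetch not shown");
    }
}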
The only way to do this in J2ME is to save the images' raw byte array (i.e. that you pass to Image.createImage()) to somewhere persistent, possibly a file using JSR75 but more likely a record store using RMS.
HTH
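A rough sketch of the RMS route (exception handling omitted; the RecordStore methods throw checked RecordStoreExceptions in real code):

import javax.microedition.lcdui.Image;
import javax.microedition.rms.RecordStore;

// Persist the raw bytes once...
RecordStore rs = RecordStore.openRecordStore("imageCache", true);
int recordId = rs.addRecord(imageBytes, 0, imageBytes.length);
rs.closeRecordStore();

// ...then rebuild the Image from the store on a later run.
rs = RecordStore.openRecordStore("imageCache", false);
byte[] data = rs.getRecord(recordId);
Image img = Image.createImage(data, 0, data.length);
rs.closeRecordStore();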

How to get performance increases using Processing for Android?

I have converted a few of my Processing sketches into Android apps, but they seem to run really slowly in the emulator and on my device.
Are there any tips on how to increase the speed and performance of my sketch running as an Android app? Are there things or parts of the Processing API I should avoid?
http://developer.android.com/guide/practices/design/performance.html
I hope that helps you.
Facepalm to all the answers... they're written for programmers who need every single % of the CPU optimized. But I have the same issue as you: not much going on in the sketch, yet it's still slow. And I doubt BlackDragon even knows what Processing is.
My answer:
Try using a different renderer, P2D for example.
You can use it by passing it with the sketch size definition:
size(width, height, P2D);
Or, if you don't want to use that function, you can override sketchRenderer() by adding this method:
String sketchRenderer() {
    return P2D;
}

Advice on handling large data volumes

So I have a "large" number of "very large" ASCII files of numerical data (gigabytes altogether), and my program will need to process the entirety of it sequentially at least once.
Any advice on storing/loading the data? I've thought of converting the files to binary to make them smaller and for faster loading.
Should I load everything into memory all at once?
If not, what's a good way of loading the data partially?
What are some Java-relevant efficiency tips?
So then what if the processing requires jumping around in the data for multiple files and multiple buffers? Is constant opening and closing of binary files going to become expensive?
I'm a big fan of memory-mapped I/O, aka direct byte buffers. In Java they are called mapped byte buffers and are part of java.nio. (Basically, this mechanism uses the OS's virtual-memory paging system to 'map' your files and present them programmatically as byte buffers. The OS will manage moving the bytes to/from disk and memory auto-magically and very quickly.)
I suggest this approach because a) it works for me, and b) it will let you focus on your algorithm and let the JVM, OS, and hardware deal with the performance optimization. All too frequently, they know what is best more so than us lowly programmers. ;)
How would you use MBBs in your context? Just create an MBB for each of your files and read them as you see fit. You will only need to store your results.
BTW: How much data are you dealing with, in GB? If it is more than 3-4 GB, then this won't work for you on a 32-bit machine, as the MBB implementation is dependent on the addressable memory space of the platform architecture. A 64-bit machine and OS will take you to 1 TB or 128 TB of mappable data.
If you are thinking about performance, then get to know Kirk Pepperdine (a somewhat famous Java performance guru). He is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: NIO Performance Tips and other Java performance related things.
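A minimal example of mapping a file with standard java.nio (assuming the file fits in a single mapping, i.e. under ~2 GB; larger files need to be mapped in windows, since a single mapping is limited to Integer.MAX_VALUE bytes):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapFile {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("data.bin", "r");
        FileChannel channel = raf.getChannel();

        // The OS pages the file in and out as the buffer is read.
        MappedByteBuffer buf = channel.map(
                FileChannel.MapMode.READ_ONLY, 0, channel.size());

        while (buf.hasRemaining()) {
            byte b = buf.get();   // process the bytes sequentially here
        }
        channel.close();
        raf.close();
    }
}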
You might want to have a look at the entries in the Wide Finder Project (do a google search for "wide finder" java).
The Wide finder involves reading over lots of lines in log files, so look at the Java implementations and see what worked and didn't work there.
You could convert the files to binary, but then you have one or more extra copies of the data if you need to keep the originals around.
It may be practical to build some kind of index on top of your original ASCII data, so that if you need to go through the data again you can do it faster on subsequent passes.
To answer your questions in order:
Should I load everything into memory all at once?
Not if you don't have to. For some files you may be able to, but if you're just processing sequentially, do some kind of buffered read through them one by one, storing whatever you need along the way.
If not, what's a good way of loading the data partially?
BufferedReaders etc. are simplest (see the sketch after this answer), although you could look deeper into FileChannel etc. to use memory-mapped I/O to go through windows of the data at a time.
What are some Java-relevant efficiency tips?
That really depends on what you're doing with the data itself!
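The buffered, line-at-a-time version mentioned above is only a few lines ("data.txt" is a stand-in path):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SequentialPass {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader("data.txt"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                // parse the numbers out of 'line' and keep only what you need
            }
        } finally {
            in.close();
        }
    }
}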
Without any additional insight into what kind of processing is going on, here are some general thoughts from when I have done similar work.
Write a prototype of your application (maybe even "one to throw away") that performs some arbitrary operation on your data set. See how fast it goes. If the simplest, most naive thing you can think of is acceptably fast, no worries!
If the naive approach does not work, consider pre-processing the data so that subsequent runs will run in an acceptable length of time. You mention having to "jump around" in the data set quite a bit. Is there any way to pre-process that out? Or, one pre-processing step can be to generate even more data - index data - that provides byte-accurate location information about critical, necessary sections of your data set. Then, your main processing run can utilize this information to jump straight to the necessary data.
So, to summarize, my approach would be to try something simple right now and see what the performance looks like. Maybe it will be fine. Otherwise, look into processing the data in multiple steps, saving the most expensive operations for infrequent pre-processing.
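One way to build the index data mentioned above: a sketch that records the byte offset of every line in one sequential pass, so later passes can seek straight to line N.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class LineIndexer {
    // Scan once, remembering where each line starts.
    public static List<Long> buildIndex(String path) throws IOException {
        List<Long> offsets = new ArrayList<Long>();
        RandomAccessFile raf = new RandomAccessFile(path, "r");
        offsets.add(0L);
        while (raf.readLine() != null) {
            offsets.add(raf.getFilePointer()); // the final entry is just EOF, harmless
        }
        raf.close();
        return offsets;
    }

    // Later passes jump straight to the line they need.
    public static String readLineAt(String path, List<Long> index, int n) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(path, "r");
        raf.seek(index.get(n));
        String result = raf.readLine();
        raf.close();
        return result;
    }
}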
Don't "load everything into memory". Just perform file accesses and let the operating system's disk page cache decide when you get to actually pull things directly out of memory.
This depends a lot on the data in the file. Big mainframes have been doing sequential data processing for a long time, but they don't normally use random access for the data. They just pull it in a line at a time and process that much before continuing.
For random access it is often best to build objects with caching wrappers that know where in the file the data they need to construct lives. When needed, they read that data in and construct themselves. This way, when memory is tight, you can just start discarding objects without worrying too much about not being able to get them back later.
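Roughly, the wrapper idea looks like this (all names invented; the loaded value can be dropped and re-read at will):

import java.io.IOException;
import java.io.RandomAccessFile;

// Knows where its data lives on disk; loads lazily and can be evicted freely.
public class LazyRecord {
    private final String path;
    private final long offset;
    private String data;          // null until loaded, or after eviction

    public LazyRecord(String path, long offset) {
        this.path = path;
        this.offset = offset;
    }

    public String get() throws IOException {
        if (data == null) {
            RandomAccessFile raf = new RandomAccessFile(path, "r");
            raf.seek(offset);
            data = raf.readLine();
            raf.close();
        }
        return data;
    }

    public void evict() {
        data = null;              // safe: get() will re-read from disk
    }
}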
You really haven't given us enough info to help you. Do you need to load each file in its entirety in order to process it? Or can you process it line by line?
Loading an entire file at a time is likely to result in poor performance even for files that aren't terribly large. Your best bet is to define a buffer size that works for you and read/process the data one buffer at a time.
I've found Informatica to be an exceptionally useful data processing tool. The good news is that the more recent versions even allow Java transformations. If you're dealing with terabytes of data, it might be time to pony up for the best-of-breed ETL tools.
I'm assuming you want to do something with the results of the processing here, like store it somewhere.
If your numerical data is regularly sampled and you need random access, consider storing it in a quadtree.
I strongly recommend leveraging regular expressions and looking into the "new" I/O package, java.nio, for faster input. Then it should go as quickly as you can realistically expect gigabytes of data to go.
If at all possible, get the data into a database. Then you can leverage all the indexing, caching, memory pinning, and other functionality available to you there.
If you need to access the data more than once, load it into a database. Most databases have some sort of bulk loading utility. If the data can all fit in memory, and you don't need to keep it around or access it that often, you can probably write something simple in Perl or your favorite scripting language.
