Database on client-server Android app - Java

I am developing a simple client-server application in Android, and I have a question. The app's data will be stored on the server, and the app will need to access it. I would like to know the best approach to managing the data: two databases, one on the server and one on the smartphone (which would need synchronization), or only one on the server (so every request has to go to the server)?
Thanks in advance.

Well, frankly speaking, it depends on the kind of data you are handling.
If your data is fully dynamic and changes at short intervals, you should always query the server.
On the other hand, if your data is relatively static and stays the same over longer periods, it is wiser to cache it in your local database instead of querying the server again and again.
Another thing to keep in mind is the size of the data. If the data is very large and you store it in your local database, you need to clear out old data at regular intervals to make sure your app is not eating up the device's storage.

It depends on your needs. Managing a very large database inside the app is not the appropriate way (the app's storage footprint will keep growing). You should store all of your data on the server and use SQLite in the app for caching only, and don't cache everything, just the data that doesn't change often.
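The "cache only what rarely changes" advice can be sketched as a small time-to-live cache. Everything here is illustrative, not from any particular framework: the class and method names are made up, and the Supplier stands in for a real server call.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch: cache server data locally with a time-to-live, so relatively
// static data is fetched once and stale entries fall back to the server.
public class TtlCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Return the cached value if still fresh; otherwise load from the
    // "server" (the supplier) and cache the result.
    public V get(K key, Supplier<V> loadFromServer) {
        Entry<V> e = store.get(key);
        if (e != null && System.currentTimeMillis() < e.expiresAt) {
            return e.value;
        }
        V fresh = loadFromServer.get();
        store.put(key, new Entry<>(fresh, System.currentTimeMillis() + ttlMillis));
        return fresh;
    }
}
```

In an Android app the "server call" would be an HTTP request and the cache could be backed by SQLite instead of a map, but the freshness check is the same.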

Related

Tables in RAM or DB tables, for best performance

I am programming a server application (chat server side) in Java, which receives requests and sends them on to a target.
I have several tables in the database, and when the server application starts, I programmed it to copy all the content of the database into Map tables (in RAM) in order to speed up pulling and pushing data while the application is running.
Is this the correct way? Or would you suggest I pull data from the database directly whenever I need a detail, and remove the Map tables from RAM?
I am suffering from a memory leak.
Does dealing with the database slow down the application?
Whether caching all or some of the data in memory makes sense depends on your use case. It can improve performance but it adds complexity which might not be needed.
You can load millions or even billions of records into a JVM, but for much more than that you need off-heap storage, such as a data store designed for the purpose, or a database. Using off-heap memory you can hold trillions of records in a JVM, but that is rarely needed.
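One middle ground between copying every table into RAM at startup and hitting the database on every request is to load records lazily on first access. This is a minimal sketch under that assumption; the loader function stands in for a real database query, and the names are purely illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch: instead of copying whole tables into Map tables at startup,
// load each record on first access. Only rows that are actually
// requested end up in memory.
public class LazyUserCache {
    private final Map<Integer, String> users = new ConcurrentHashMap<>();
    private final Function<Integer, String> loadUser; // stands in for a SELECT by id

    public LazyUserCache(Function<Integer, String> loadUser) {
        this.loadUser = loadUser;
    }

    public String get(int id) {
        // computeIfAbsent calls the loader only on a cache miss.
        return users.computeIfAbsent(id, loadUser);
    }

    public int cachedCount() { return users.size(); }
}
```

This keeps the hot rows fast without holding the whole database on the heap, which also makes memory leaks easier to track down.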

Is there a Java local queue library I can use that keeps memory usage low by dumping to the hard drive?

This may not be possible, but I thought I might just give it a try. I have some work that processes data and makes one of three decisions for each item: keep, discard, or modify/reprocess (because it's unsure whether to keep or discard it). This generates a very large amount of data, because reprocessing may break an item into many different parts.
My initial method was to send everything to my executor service for processing, but because the number of items was so large I would run out of memory very quickly. Then I decided to offload the queue to a messaging server (RabbitMQ), which works fine, but now I'm bound by network I/O. What I like about RabbitMQ is that it keeps messages in memory up to a certain level and then dumps old messages to the local drive, so if I have 8 GB of memory on my server I can still have a 100 GB message queue.
So my question is: is there any library with a similar feature in Java? Something I can use as a non-blocking queue that keeps only X items in memory (either by number of items or by size) and writes the rest to the local drive.
Note: right now I'm only asking for this to be used on one server. In the future I might add more servers, but because each server is self-generating data I would try to take messages from one queue and push them to another if one server's queue is empty. The library would not need network access, but I would need to access the queue from another Java process. I know this is a long shot, but I thought if anyone knew, it would be SO.
Not sure if it is the approach you are looking for, but why not use a lightweight database like HSQLDB and a persistence layer like Hibernate? You can keep your messages in memory, commit them to the DB to save them on disk, and later query them with a convenient SQL query.
Actually, as Cuevas wrote, HSQLDB could be a solution. If you use its "cached table" feature, you can specify the maximum amount of memory to use; data exceeding that limit is written to the hard drive.
Use the filesystem. It's old-school, yet so many engineers get bitten by libraries because they are lazy. It's true that HSQLDB provides lots of value-add features, but in the context of staying lightweight, the filesystem is hard to beat.
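A minimal sketch of the filesystem approach: a FIFO queue that keeps at most N items on the heap and spills the overflow to one file per item in a temp directory. The class name and the one-file-per-item layout are assumptions for illustration; a production version would batch writes, store binary payloads, and handle concurrent access.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: bounded in-memory FIFO queue that spills overflow to disk.
public class SpillQueue {
    private final Queue<String> memory = new ArrayDeque<>();
    private final Path spillDir;
    private final int memoryLimit;
    private long writeSeq = 0, readSeq = 0; // FIFO order of spilled items

    public SpillQueue(Path spillDir, int memoryLimit) {
        this.spillDir = spillDir;
        this.memoryLimit = memoryLimit;
    }

    // Convenience factory that spills into a fresh temp directory.
    public static SpillQueue inTempDir(int memoryLimit) {
        try {
            return new SpillQueue(Files.createTempDirectory("spillq"), memoryLimit);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public void offer(String item) {
        // Once anything has spilled, keep spilling until the disk drains,
        // so FIFO order is preserved.
        if (memory.size() < memoryLimit && writeSeq == readSeq) {
            memory.add(item); // fast path: stays on the heap
            return;
        }
        try {
            Files.writeString(spillDir.resolve("item-" + writeSeq++), item);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public String poll() {
        String head = memory.poll();
        if (head != null) return head;        // in-memory items are oldest
        if (readSeq == writeSeq) return null; // nothing spilled: queue empty
        try {
            Path p = spillDir.resolve("item-" + readSeq++);
            String item = Files.readString(p);
            Files.delete(p);
            return item;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

This is roughly what RabbitMQ's paging does, minus the network hop; the durability guarantees here are weak (a crash mid-write loses the item being written).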

How to store user preferences in an application where users only communicate via SMS messages?

I'm developing an application in Java, hosted on Google App Engine, where I need to store user preferences (an object with 10 String variables) while the user is logged in. I will be using this data frequently (once every two minutes per user). Should I use the datastore with memcache, or is there another scalable way? I'm expecting very high traffic (maybe 50,000 people at a time). Is it OK to store 50,000 objects in memcache at a time?
It's an SMS application where users interact only through SMS, so I cannot use a session to store the data, as my traffic is routed through an SMS gateway.
Is it O.K. to store 50,000 objects in memcache? Sure, but as others have correctly noted, it's a cache, so you're going end up storing data in the datastore, too.
Stepping back, one of the big reasons to use memcache in an interactive application is a better user experience. You want to minimize response time by caching things that would introduce a noticeable delay if you had to pull them out of slower storage.
Is that even worth doing for an SMS app? Compared to the delays in the SMS infrastructure, I'd bet that the time difference between memcache and datastore is barely measurable.
I'd keep your app as simple as possible until it works, then look for optimization opportunities. I think you'll find them more along the lines of making sure that properties you're not going to use in queries are unindexed, so that your writes will be faster.
Store it in the session. Memcache is a cache and shouldn't be used directly for business logic.

What is the most efficient way to store analytics beacons?

Similar to how google analytics sends beacons from javascript that track events, what are the most efficient ways to collect that beacon data and return back to the client in the fastest time?
For example, if I have a server-to-server beacon call, I want to make that call return as fast as possible on the client's server.
PHP to a flat file?
PHP to a local queue?
Java Server that logs to a queue and maintains a connection the remote queue the whole time?
custom c++ server?
This would be on the order of 1000 requests per second.
There are 2 aspects to this.
1) the client's beacon call should be done as quickly as possible. This means the incoming HTTP request should respond 200 OK and exit as soon as possible, so it probably shouldn't do the actual data writing itself. It should hand that off to another process in the background, either by a background shell execution or by utilizing a queue/job mechanism like Gearman.
2) The data writing itself, done in the background away from the client's attention, has a little more time to work with. 1,000 writes per second should be fine for a well-tuned database with row locking on modern hardware, as long as it isn't being SELECTed from too heavily at the same instant. This could also be a good usage scenario for a key-value store as the immediate data storage; a separate analysis/reporting process could then query the key-value store offline for all stored data, process it, and eventually copy it into a database.
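Point 1 can be sketched in Java with a queue between the request handler and a background writer. The class is illustrative; the list standing in for the database or key-value store, and the method names, are assumptions for the sketch.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: the request handler only enqueues the beacon and returns 200,
// while a background thread performs the slow write.
public class BeaconCollector {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> store = new CopyOnWriteArrayList<>(); // stands in for the DB

    public BeaconCollector() {
        Thread writer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String beacon = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (beacon != null) store.add(beacon); // slow write happens here
                }
            } catch (InterruptedException ignored) { }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Called from the HTTP handler: O(1), never blocks on the database.
    public void accept(String beacon) { queue.add(beacon); }

    public List<String> stored() { return store; }

    // Wait (bounded) until n beacons have been written out.
    public boolean awaitStored(int n) {
        long deadline = System.currentTimeMillis() + 2000;
        while (store.size() < n && System.currentTimeMillis() < deadline) {
            Thread.onSpinWait();
        }
        return store.size() >= n;
    }
}
```

The same shape applies whether the handoff is an in-process queue, Gearman, or a message broker; the key property is that the HTTP response never waits on the write.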

How to improve my software project's speed?

I'm doing a school software project with my classmates in Java.
We store the info in a remote DB.
When we start the application, we pull all the information from the database and transform it into objects to use in our application (using Java SQL statements).
In the application we edit some of these objects, and when we exit the application we save or update the information in the database using Hibernate.
As you see, we don't use Hibernate for pulling information in; we use it just for saving and updating.
We have two very similar problems:
The loading of objects (when we start the app) and the saving of objects with Hibernate (when closing the app) take too much time.
Our project is not a huge enterprise application; it's quite a small app. We just manage some students, teachers, homework and tests, so our DB is also very, very small.
How could we increase performance?
Later edit: with a local database it runs very quickly; it's only slow with a remote database.
Are you saying you are loading the entire database into memory and then manipulating it? If that is the case, why don't you instead simply use the database as a storage device, and do lookups and manipulation as necessary (using Hibernate if you like, or something else if you don't)? The key there is to make sure that you are using connection pooling, as that will reduce the connection time.
If you keep doing what you are doing now, you could be running into memory issues as well. By not caching the entire database in memory, you will reduce memory usage and spread the network load out from the beginning and end of the run to the times when it actually needs to happen.
These two sentences are red flags for me:

"When we start the application we pull all the information from the database and transform it into objects to use in our application (using Java SQL statements). In the application we edit some of these objects and then when we exit the application we save or update information in the database using Hibernate."
Is there a requirements reason that you are loading all the information from the database into memory at startup, or why you're waiting until shutdown to save changes back in the database?
If not, I'd suggest a design change. If you've already got Hibernate mappings for the tables in the DB, I'd use Hibernate for all of your CRUD (create, read, update, delete) operations. And I'd only load the data that each page in your app needs, as it needs it.
If you can't make that kind of design change at this point, I think you've got to look closely at how you're managing the database connections. Are you using connection pools? Are you opening up multiple connections? Forgetting to release them?
Something else to look at. How are you using Hibernate to save the entities to the db? Are you doing a getHibernateTemplate().get on each one and then doing an entity.save or entity.update on each one? If so, that means you are also causing Hibernate to run a select query for each database object before it does a save or update. So, essentially, you'd be loading each database object twice (once at the beginning of the program, once before saving). To see if that's what's happening, you can turn on the show_sql property or use P6Spy to see exactly what queries Hibernate is running.
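For reference, the show_sql switch mentioned above lives in the Hibernate configuration. The property names are standard Hibernate settings; where exactly they go depends on whether you use hibernate.cfg.xml, a properties file, or programmatic configuration.

```xml
<!-- hibernate.cfg.xml: log every SQL statement Hibernate issues -->
<property name="show_sql">true</property>
<!-- optional: pretty-print the logged SQL -->
<property name="format_sql">true</property>
```

With this on, a save that is silently preceded by a SELECT per entity becomes immediately visible in the console.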
For what you are doing, you may very well be better off serializing your objects and writing them out to a flat file.
But, much more likely, you should just read / update objects directly from your database as needed instead of all at once, for all the reasons aperkins gives.
Also, consider what happens if your application crashes? If all of your updates are saved only in memory until the application is closed, everything would be lost if the app closes unexpectedly.
The difference in loading everything from a remote DB server versus loading everything from a local DB server is the network latency / pipe size. The network is a much smaller pipe than anything else. Two questions: first, how much data are we really talking about? Second, what is your network speed? 10/100/1000? Figure between 10 and 20% of your pipe size is going to be overhead due to everything from networking protocols to the actual queries themselves.
As others have stated, the way you've architected this is usually high on the list of things not to do. When starting, pull only enough data to initialize the app. As the user works through it, pull what you need for that task.
The ONLY time you pull everything is when they are working in a disconnected state. In that case, you still don't load everything as objects in the application, you just work from a local data store which gets sync'ed with the remote server every so often.
The project is pretty much complete; we can't do large refactoring on it now.
I tried to use a second-level cache for Hibernate when saving: EhCacheProvider.
In the Hibernate configuration file:
net.sf.ehcache.hibernate.EhCacheProvider
I have done a configuration for the cache in ehcache.xml, put the cache jar in the project build path, and set the Hibernate cache property for every class in the mapping.
But this cache doesn't seem to have any effect. I don't know whether it is even being used.
Try minimising the number of SQL queries, since every query has its own overhead.
You can enable database compression, which should speed things up when there is a lot of data.
Maybe you are connecting to the database many times?
Check the ping time of remote database server - it might be the problem.
As your application is just slow when running on a remote database server, I'd assume that the performance loss is due to:
Connecting to the server: try to reuse connections (pass the instance around) or use connection pooling
Query round-trip time: use as few queries as possible, see here in case of a hand-written DAL:
Preferred way of retrieving row with multiple relating rows
For Hibernate you can use its batch functionality and adjust hibernate.jdbc.batch_size.
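The batching property goes into the Hibernate configuration like this; hibernate.jdbc.batch_size is a standard Hibernate setting, and the value 50 is only an illustrative starting point.

```xml
<!-- hibernate.cfg.xml: group inserts/updates into JDBC batches of 50 -->
<property name="hibernate.jdbc.batch_size">50</property>
```

With batching enabled, saving N modified objects costs a handful of round trips instead of N, which matters most when the RTT to the remote DB is high.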
In all cases, especially when you can't refactor larger parts of the codebase, use a profiler (method timings or SQL query logs) to find the bottleneck. I bet you'll find thousands of queries, each taking around 10 ms of round-trip time, which could be merged into one.
Some other things you can look into:
You can allocate more memory to the JVM
Use the jconsole tool to investigate what the bottlenecks are.
Why don't you use two separate threads?
Thread 1 will load your objects one by one.
Thread 2 will process objects as they are loaded.
Your app will seem more interactive at startup.
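The two-thread suggestion above can be sketched with a BlockingQueue and a poison pill. Everything here is illustrative: the list stands in for the database, and the uppercase transform stands in for real processing.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: thread 1 loads objects one by one, thread 2 processes them as
// they arrive, so the app need not wait for the full load at startup.
public class PipelinedLoader {
    private static final String DONE = "__done__"; // poison pill ends the pipeline

    public static List<String> loadAndProcess(List<String> rows) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<String> processed = new CopyOnWriteArrayList<>();

        Thread loader = new Thread(() -> {
            for (String row : rows) queue.add(row); // thread 1: "load" rows
            queue.add(DONE);
        });
        Thread processor = new Thread(() -> {
            try {
                for (String row = queue.take(); !row.equals(DONE); row = queue.take()) {
                    processed.add(row.toUpperCase()); // thread 2: process as loaded
                }
            } catch (InterruptedException ignored) { }
        });
        loader.start();
        processor.start();
        try {
            loader.join();
            processor.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed;
    }
}
```

In the real app the loader would run the SQL and the processor would build the domain objects, with the UI thread free to show progress.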
It never hurts to review the basics:
Improving speed means reducing time (obviously), and to do that you find activities that take significant time but can be eliminated or replaced with something faster. What I mean by an activity is almost always a function, method, or property call, performed on a specific line of code for a specific purpose. It may invoke I/O, computation, or both. If its purpose is not essential, then it can be optimized away.
Many people use profilers to try to find these time-wasting lines of code, but most profilers miss the target because they look at functions, not lines, they go to sleep during I/O, and they worry about "self time".
Many more people try to guess what could be the problem, or they ask others to guess, such as by asking on SO. Such guesses, in the nature of guesses, are sometimes right - more often not, but people still invest time and resources in them.
There's a very simple way to find out for sure, without guessing, what could fruitfully be optimized, and here is one way to do it in Java.
Thanks for your answers. They were more than helpful.
We completely solved this problem like so:
Refactored the LOAD code. Now it uses Hibernate with lazy fetching.
Refactored the SAVE code. Now it saves just the data that was modified, right after it was modified. This way we don't have a huge save at the end.
I'm amazed at how well it all went. The amount of new code we had to write was very small.
