I've read this post, which was very close to my question, and I still didn't find what I was looking for.
I'm developing an application that relies on two plain-text files: let's say weekdays.txt and year.txt. One file will most likely (yet to be defined) have seven lines, so it's very small (a few bytes), but the other will contain 365 lines (one per day of the year), which is not very big in bytes (20 KB tops, my guess) but requires more processing.
The app is not yet done, so I'll try to be explicit:
So my application will get the current date and time, look in weekdays.txt for the line that corresponds to the current day of the week, parse that line's information and store it in memory.
After that the program should read year.txt and look for the line that corresponds to the current date and parse (and store in memory) that line's info.
Then it should print out all the stored info.
When I say 'parse the info' I mean parsing Strings, something as simple as:
the string "7*1234-568" should be read as:
String ID = "7";
int postCode = 1234;
int areaCode = 568;
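For what it's worth, the parsing really can stay that simple. A minimal Java sketch (the DayEntry class and its field names are my own, just for illustration):

public class DayEntry {
    String id;
    int postCode;
    int areaCode;

    // Parses a line like "7*1234-568" into id "7", postCode 1234, areaCode 568.
    static DayEntry parse(String line) {
        int star = line.indexOf('*');
        int dash = line.indexOf('-', star + 1);
        DayEntry e = new DayEntry();
        e.id = line.substring(0, star);
        e.postCode = Integer.parseInt(line.substring(star + 1, dash));
        e.areaCode = Integer.parseInt(line.substring(dash + 1));
        return e;
    }
}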
The goal here is to create a light (and offline, this is crucial) application for quick use.
As you can see, this is a Development 101 level application, and my question is: do you think this is too much work for a mobile phone? The reason I'm asking is that I want my app to be functional on as many of today's cellphones as possible.
By the way, do you think I should be working with a database for this kind of work instead? I've heard people around the forum talking about RMS, and some said it's kind of limited, so I just stuck with the text files. Anyway, the idea of the txt files was to make them easy for the user to update, just in case it's necessary...
Thanks in advance!
If your config files are read-only and are not going to change with time, then you could include them inside the jar. You should be able to read them using Class.getResourceAsStream, which returns an InputStream. An ASCII file with 366 lines (remember leap years) and 80 cols is around 29 KB, so even 10-year-old phones will read it without major problems (remember to perform the IO in a separate thread, though).
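A rough sketch of what that could look like inside a MIDlet (the file name and the handleLine() callback are placeholders; CLDC has no BufferedReader, hence the manual line splitting; needs java.io.InputStream, java.io.InputStreamReader and java.io.IOException):

new Thread(new Runnable() {
    public void run() {
        InputStream in = getClass().getResourceAsStream("/year.txt");
        try {
            InputStreamReader reader = new InputStreamReader(in);
            StringBuffer line = new StringBuffer();
            int c;
            while ((c = reader.read()) != -1) {
                if (c == '\n') {
                    handleLine(line.toString().trim()); // parse/store one line
                    line.setLength(0);
                } else {
                    line.append((char) c);
                }
            }
            if (line.length() > 0) {
                handleLine(line.toString().trim()); // last line without trailing newline
            }
        } catch (IOException e) {
            // report the error back to the UI
        } finally {
            try { if (in != null) in.close(); } catch (IOException ignored) {}
        }
    }
}).start();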
If the configuration could change, then you'll probably want to create a WS and have the phones fetch the config over the internet. To provide offline capabilities you could sync with the remote DB periodically and store the info in the device. RMS is record-based, and has a max size (device-dependent), but I think it is ok for your case. The drawback of this approach is that at least a first synchronization should be made, thus phones without a data plan will be left out.
Since one of your requirements is to work offline, I'd recommend using RMS. I'm not that confident in using files in J2ME for such important data (not sure if it's better now), since it can be prone to errors and file corruption.
If the amount of data you're going to save is as you say, 7 lines for the weekdays and 365 for the year, then there's no problem with RMS.
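For completeness, a minimal RMS sketch (the store name and the one-line-per-record layout are just an example):

import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

public class WeekdayStore {
    // Appends one text line as a record; returns the record id assigned by RMS.
    public static int saveLine(String line) throws RecordStoreException {
        RecordStore rs = RecordStore.openRecordStore("weekdays", true);
        try {
            byte[] data = line.getBytes();
            return rs.addRecord(data, 0, data.length);
        } finally {
            rs.closeRecordStore();
        }
    }

    // Reads one record back as a string (record ids start at 1).
    public static String readLine(int recordId) throws RecordStoreException {
        RecordStore rs = RecordStore.openRecordStore("weekdays", false);
        try {
            return new String(rs.getRecord(recordId));
        } finally {
            rs.closeRecordStore();
        }
    }
}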
Good luck!
I have a big database in a .txt file, and I'd like to know what's faster: reading the data from the file every time I want to access it, or loading all the data into variables when the program starts so I can access it from the variables themselves.
PS: perhaps it's important to know that I code in Java, but it's more of a general question.
Assume for a second you have perfect recall, but you can choose to forget.
Now, what is faster?
1. Read the book, remembering it all, then recall from memory as you need information.
2. Don't read the book. When you need some information, skim the book, looking only for the small piece of information you need and forgetting the rest. When you need more information, skim the book again; but you don't even know where in the book the information is, since you remember none of it, so every time you have to start reading the book from the beginning.
Obviously, #1 is way, WAY faster. Sure it requires your brain to be able to remember it all, but performance-wise, there is no comparison at all.
Exception: If you only ever need one piece of information, #2 will be faster, since you can stop reading as soon as you find the information you need, i.e. you don't have to read the whole book.
Short answer
Variable
Long answer
Reading a file is quite a slow operation. It involves accessing your disk, which is noticeably slower than accessing a variable you already have in memory. And bear in mind that when you read the file you'll need to store its contents somewhere in memory anyway, so you'll pay the memory-access cost as well.
You can play around with some examples, reading a file and measuring how long it takes. Remember to run the code several times so you get a more accurate result.
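For example, something along these lines (plain Java SE; the file name is assumed and the numbers are only indicative):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ReadBenchmark {
    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        List<String> lines = Files.readAllLines(Paths.get("data.txt")); // disk read
        long t1 = System.nanoTime();

        long count = 0;
        for (String line : lines) {        // scan the copy that is already in memory
            count += line.length();
        }
        long t2 = System.nanoTime();

        System.out.println("read from disk : " + (t1 - t0) / 1_000_000.0 + " ms");
        System.out.println("scan in memory : " + (t2 - t1) / 1_000_000.0 + " ms (" + count + " chars)");
        // Run it several times; JIT warm-up and OS file caching skew the first runs.
    }
}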
Another point you should consider is how you use your DB. If you're just storing a couple of values, it's fine to go for a txt file. But as soon as your storage layer gets more complex you'll probably need a proper DB (e.g. MySQL, DynamoDB, Mongo).
I want to preserve data across service restarts. The service uses an ArrayList of ArrayLists of integers, plus some other variables.
Since it's about 40-60 MB, I don't want it to be regenerated each time the service restarts (it takes a lot of time); I want to generate the data once and, ideally, reuse a copy of it on the next service restart.
How can it be done?
Before suggesting that I write the data to a file, please consider how I would go about putting a data structure similar to a multidimensional array (3D or above) into a file, and that, once written, it will likely take significant time to read back too.
You can try writing your data to a file after generation. Then, on the next service restart, you can simply read it back from the file.
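A minimal sketch of that idea, assuming plain Java serialization (the class and file names are illustrative; on Android you would typically put the file under context.getFilesDir() or context.getCacheDir()):

import java.io.*;
import java.util.ArrayList;

public class DataCache {
    // Returns the cached structure, or null if there is no usable cache yet.
    @SuppressWarnings("unchecked")
    public static ArrayList<ArrayList<Integer>> load(File file) {
        if (!file.exists()) return null;
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(file)))) {
            return (ArrayList<ArrayList<Integer>>) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            return null; // fall back to regenerating the data
        }
    }

    // Writes the generated structure once, so later restarts can just load() it.
    public static void save(File file, ArrayList<ArrayList<Integer>> data) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(file)))) {
            out.writeObject(data); // ArrayList and Integer are both Serializable
        }
    }
}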
If you need persistent data, then put it into a database:
https://developer.android.com/guide/topics/data/data-storage
or try some object database like http://objectbox.io/
So you're afraid reading from the file would take a long time due to its size and the number and size of the rows (the inner arrays).
I think it might be worth stopping for a minute and asking yourself whether you need all this data at once. Maybe you only need a portion of it at any given time, and there are scenarios in which you don't use some (or maybe most) of the data? If that is likely, I would suggest computing the data on demand, when required, and only keeping a memory-based cache for later use in the current session.
Otherwise, if you do need all the data at a given time, you have a trade-off between size on disk and processing time. You can shrink the data using some compression algorithm, but that would be at the expense of processing time. On the other hand, you can just serialize your data object and save it to disk as is: less time, more disk space.
Another solution for your scenario could be to just use a DB and a cursor (Room on top of SQLite). I don't know exactly what it is that you're trying to do, but your arrays can easily be modeled into a DB. Model a single row as you'd like and add the outer index of the array to that model. Then save the models into the DB, potentially making the outer index field the primary key of the table.
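If you went the Room route, the row model could look roughly like this; the entity, field and query names are made up for illustration:

import androidx.room.Dao;
import androidx.room.Entity;
import androidx.room.Insert;
import androidx.room.PrimaryKey;
import androidx.room.Query;
import java.util.List;

// One table row per element of the inner arrays, keyed by its indices.
@Entity(tableName = "values_table")
public class ValueRow {
    @PrimaryKey(autoGenerate = true)
    public long id;

    public int outerIndex;  // index into the outer ArrayList
    public int innerIndex;  // position inside the inner ArrayList
    public int value;
}

@Dao
interface ValueDao {
    @Insert
    void insertAll(List<ValueRow> rows);

    // Load only the slice you need instead of the whole 40-60 MB structure.
    @Query("SELECT * FROM values_table WHERE outerIndex = :outerIndex ORDER BY innerIndex")
    List<ValueRow> rowsFor(int outerIndex);
}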
Regardless of the things I wrote, try to think about whether you really need this data persisted on the client; maybe you can store it on the server side? If so, there are other storage and access solutions that don't involve the Android client at all.
Thank you all for answering this question.
This is what I have finally settled for:
Instead of using the structure as part of the app, I turned this into a tool, which prepares the data to be used with the main app. In doing so, it also removed the concern about service restarts.

This tool first reads all the strings from the input file(s).

Then it puts all of them into the structure, one at a time. (This was the part I had doubts about and asked the question about: since all the data only lives in the structure here, as soon as the program terminates this structured data is unusable.)

Next, I prepared another structure for putting this data into a file, and wrote all of it out so that I don't need to read the whole input file again and again, but only a few lines.

Then I thought: why spend time reading files when I can hard-code the data into my app? So, as the final step of this preprocessing tool, I made it emit a class that has switch(input) { case X: return Y; }.

Now I will just have to put this class into the app I wanted to make.
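Roughly, the generated class looks like this (the keys and values here are placeholders; the real ones are emitted by the tool):

// Lookup class generated by the preprocessing tool; no file IO at runtime.
public final class GeneratedData {
    public static String lookup(int input) {
        switch (input) {
            case 1:  return "first precomputed value";
            case 2:  return "second precomputed value";
            // ... one case per precomputed entry, generated by the tool
            default: return null;
        }
    }
}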
I know this all sounds very abstract (even stretching the concept of abstract); if you want to know the details, please let me know. I am also including a link to my "tool". Please visit it and let me know if there would have been a better way.
P.S. There could still be errors in this tool; if you find any, let me know so I can fix them.
P.P.S.
link: Kompressor Tool
We have market data handlers which publish quotes to a KDB Ticker Plant. We use the exxeleron qJava library for this purpose. Unfortunately latency is quite high: hundreds of milliseconds when we try to insert a batch of records. Can you suggest some latency tips for the KDB + Java binding, as we need to publish quite fast?
There's not enough information in this message to give a fully qualified response, but having done the same with Java+KDB it really comes down to eliminating the possibilities. This is common sense, really, nothing super technical.
Make sure you're inserting asynchronously.
Verify it's exxeleron qJava that is causing the latency. I don't think there's hundreds of millis of overhead there.
Verify the CPU your tickerplant is on isn't overloaded. Consider re-nicing, core binding, etc.
Analyse your network latencies. Also, if you're using Linux, there are a few TCP tweaks you can try, e.g. TCP_QUICKACK.
As you're using Java, be smarter about garbage collection. It's highly configurable, although not directly controllable.
If you find out the tickerplant is the source of latency, you could either recode it to not write to disk, or get a faster local disk.
There's so many more suggestions, but the question is a bit too ambiguous.
EDIT
Back in 2007, with old(ish) servers and a very old version of KDB+ we were managing an insertion rate of 90k rows per second using the vanilla c.java. That was after many rounds of the above points. I'm sure you can achieve way more now, it's a matter of finding where the bottlenecks are and fixing them one by one.
Make sure the data published to the ticker plant is batched: wait a little bit and insert, say, a few rows of data in one batch, rather than inserting row by row as each new record comes in.
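As a rough illustration of that batching idea, here is a sketch that buffers rows and flushes them either when the batch is full or after a short wait. The Publisher callback is a placeholder for whatever asynchronous publish call your client library provides (e.g. an async call to .u.upd); all names are mine:

import java.util.ArrayList;
import java.util.List;

public class QuoteBatcher {
    public interface Publisher {
        void publishAsync(List<Object[]> rows) throws Exception; // async send of one batch
    }

    private final Publisher publisher;
    private final int maxBatch;
    private final long maxWaitMillis;
    private final List<Object[]> buffer = new ArrayList<>();
    private long firstRowTime;

    public QuoteBatcher(Publisher publisher, int maxBatch, long maxWaitMillis) {
        this.publisher = publisher;
        this.maxBatch = maxBatch;
        this.maxWaitMillis = maxWaitMillis;
    }

    // Called for every incoming quote; flushes when the batch is full or old enough.
    public synchronized void add(Object[] row) throws Exception {
        if (buffer.isEmpty()) {
            firstRowTime = System.currentTimeMillis();
        }
        buffer.add(row);
        boolean full = buffer.size() >= maxBatch;
        boolean stale = System.currentTimeMillis() - firstRowTime >= maxWaitMillis;
        if (full || stale) {
            flush();
        }
    }

    public synchronized void flush() throws Exception {
        if (buffer.isEmpty()) return;
        publisher.publishAsync(new ArrayList<>(buffer));
        buffer.clear();
    }
}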
I am writing a multiplayer game where I allow players to attack each other.
Only the attacker must be logged in.
I need to know how many attacks a player has made in the last 6 hours, and I need to know whether the defender was attacked during the last hour. I don't care about attacks made more than 6 hours ago. Is there any better way to implement this than storing the data in a database and deleting "expired" rows (older than 6 hours)?
The server is written in Java; the clients will be Android.
Any ideas / tutorial links or even keywords are appreciated. Also, if you think there is no better solution, please say so :)
For GAE, there is no real alternative, no. Besides using the datastore, GAE offers a memcache system, which, in fact, is designated for storing temporary data. However, as stated in the documentation, "Values can expire from the memcache at any time, and may be expired prior to the expiration deadline set for the value." Therefore, it would be the best to use the datastore.
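For illustration, a sketch with the legacy low-level datastore API; the kind and property names ("Attack", "attacker", "defender", "timestamp") are made up, and the combined equality + inequality filter will need a composite index:

import com.google.appengine.api.datastore.*;
import java.util.Date;

public class AttackLog {
    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    // Store one entity per attack.
    public void recordAttack(String attackerId, String defenderId) {
        Entity attack = new Entity("Attack");
        attack.setProperty("attacker", attackerId);
        attack.setProperty("defender", defenderId);
        attack.setProperty("timestamp", new Date());
        datastore.put(attack);
    }

    // Attacks made by this player in the last 6 hours (requires a composite index).
    public int attacksInLastSixHours(String attackerId) {
        Date cutoff = new Date(System.currentTimeMillis() - 6L * 60 * 60 * 1000);
        Query q = new Query("Attack")
                .setFilter(Query.CompositeFilterOperator.and(
                        new Query.FilterPredicate("attacker", Query.FilterOperator.EQUAL, attackerId),
                        new Query.FilterPredicate("timestamp", Query.FilterOperator.GREATER_THAN_OR_EQUAL, cutoff)));
        return datastore.prepare(q).countEntities(FetchOptions.Builder.withLimit(1000));
    }
}

Old rows can then be cleaned up (or simply ignored) by a periodic cron job, since anything older than 6 hours is never queried.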
Due to (quite annoying) limitations on many J2ME phones, audio files cannot be played until they are fully downloaded. So, in order to play live streams, I'm forced to download chunks at a time, and construct ByteArrayInputStreams, which I then feed to Players.
This works well, except that there's an annoying gap of about 1/4 of a second every time a stream ends and a new one is needed. Is there any way to solve this problem, or the problem above?
The only good way to play long (3 minutes and more) tracks with J2ME JSR 135, moderately reliably, on the largest number of handsets out there, is to use a "file://" URL when you create the Player, or to have the InputStream actually come from a FileConnection.
Recent BlackBerry phones can use a ByteArrayInputStream only when they have a large Java heap available.
A lot of phones running the Symbian operating system will allow you to put files in a private area for the J2ME application while still being able to play tracks in the same location.
Unfortunately you can't get rid of these gaps, at least not on any device I've tried it on. It's very annoying indeed. It's part of the spec that you can't stream audio or video over HTTP.
If you want to stream from a server, the only way to do it is to use an RTSP server instead, though you'll need to check support for this on your device.
And faking RTSP using a local server on the device (rtsp://localhost...) doesn't work either. I tried that too.
EDIT2: Or you could just look at this which seems to be exactly what you want: http://java.sun.com/javame/reference/apis/jsr135/javax/microedition/media/protocol/DataSource.html
I would create two Player instances and make sure I had received enough chunks before starting to play them. Then I would start playing the first chunk through player one and load the second one into player two. I would use the TimeBase class to keep track of how much time has passed, and when I knew the first chunk was about to end (you should know how long each chunk plays for), I would start playing the second chunk through the second player and load the third chunk into the first, and so on until there are no more chunks to play.
The key here is using the TimeBase class properly to know when to make the transition. I think that should get rid of the annoying 1/4-second gap between chunks. I hope that works; let me know if it does, because it sounds really interesting.
EDIT: Player.prefetch() could also be useful here in reducing latency.
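A rough sketch of the two-Player idea combined with prefetch(); the "audio/mpeg" content type and the nextChunk() placeholder are assumptions, and the switch-over here is driven by END_OF_MEDIA rather than TimeBase:

import java.io.ByteArrayInputStream;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.PlayerListener;

public class ChunkedAudioPlayer implements PlayerListener {
    private Player next; // already prefetched while the current chunk plays

    public void start(byte[] firstChunk, byte[] secondChunk) throws Exception {
        Player current = createPrefetched(firstChunk);
        next = createPrefetched(secondChunk);
        current.addPlayerListener(this);
        current.start();
    }

    private Player createPrefetched(byte[] chunk) throws Exception {
        Player p = Manager.createPlayer(new ByteArrayInputStream(chunk), "audio/mpeg");
        p.realize();
        p.prefetch(); // reduces the start-up latency when we switch over
        return p;
    }

    public void playerUpdate(Player player, String event, Object eventData) {
        if (PlayerListener.END_OF_MEDIA.equals(event)) {
            try {
                player.close();            // release the finished chunk
                if (next != null) {
                    next.addPlayerListener(this);
                    next.start();          // start the already-prefetched chunk immediately
                    byte[] chunk = nextChunk();
                    next = (chunk != null) ? createPrefetched(chunk) : null;
                }
            } catch (Exception e) {
                // report the error / stop playback
            }
        }
    }

    // Placeholder: return the next downloaded chunk, or null when the stream is done.
    private byte[] nextChunk() {
        return null;
    }
}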