Java best way to multithread save/load large amounts of data [closed]

For a game I am writing, the world is saved as chunks. Each chunk (when saved) is just under 200 KB (they are very large chunks). Whenever a world is loaded, 121 chunks need to be loaded. Each one takes only a fraction of a second, but those fractions add up to several seconds.
That would be acceptable, but saving matters even more. When the player walks into new chunks, all the chunks out of range are saved and unloaded. As each save takes a fraction of a second, I get a lag spike of over a second every time the player crosses a chunk boundary. For this reason, I hope to use threads to save and load chunks so that a chunk can be saved/loaded while the game keeps running.
I have no idea how I would implement such a thing though. So, if anyone could share a link to a tutorial or give some source code I could play with, that would be great!
Thanks!
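A minimal sketch of the kind of background saving being described; the Chunk type and its save() method are placeholders for whatever the game actually uses:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    interface Chunk { void save(); } // placeholder for the game's chunk type

    class ChunkSaver {
        // A single background thread keeps file IO off the game loop,
        // and a single writer avoids two saves hitting the same file at once.
        private final ExecutorService io = Executors.newSingleThreadExecutor();

        /** Queue a chunk for saving without blocking the game loop. */
        void saveAsync(Chunk chunk) {
            io.submit(chunk::save);
        }

        /** Call on shutdown so queued saves finish before the game exits. */
        void close() {
            io.shutdown();
        }
    }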

I would use memory-mapped files, and I would pack as much as possible into as few files as possible (each file adds overhead).
If you do this you can load/save gigabytes in a fraction of a second.
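A minimal sketch of that approach with java.nio, assuming (as an example layout) a single world.dat file holding 121 fixed-size chunks:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    class ChunkFile {
        static final int CHUNK_SIZE = 200 * 1024; // ~200 KB per chunk
        static final int CHUNK_COUNT = 121;

        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile("world.dat", "rw");
                 FileChannel ch = raf.getChannel()) {
                // Map the whole region once; the OS pages chunks in and out lazily.
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE,
                                              0, (long) CHUNK_SIZE * CHUNK_COUNT);
                byte[] chunk = new byte[CHUNK_SIZE];
                map.position(5 * CHUNK_SIZE); // "load" chunk 5
                map.get(chunk);
                map.position(5 * CHUNK_SIZE); // "save" it back
                map.put(chunk);
            }
        }
    }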

Related

Java list expand strategy [closed]

For example, in an ArrayList each item is very big, and the size of the list may be large enough to exceed the amount of memory. What is the strategy for expanding a list in this situation?
Thanks for all the replies. The problem I have encountered is receiving a list of objects from a remote call, where each object may be quite large and the list may hold 10000 entries or more. I wonder how to keep this list in memory during execution.
List<BigItem> list = queryService.queryForList(params...);
Your question is very generic, but I think it is possible to give a certain "fact based" answer nonetheless:
If your setup is such that memory becomes a bottleneck, then your application needs to be aware of that fact. In other words: you need to implement measurements within your application.
You have to enable your application to decide whether "growing" a list (and "loading" those expensive objects, for example) is possible or not.
A simple starting point is described here; but of course, this is really a complicated undertaking. Your code has to constantly monitor its memory usage and take appropriate steps as it gets close to its limits.
Alternatively, you should do profiling to really understand the memory consumption "behavior" of your application. There is no point in putting effort into "self-controlling" if your application happens to have various memory leaks, or if your code generates garbage at a rate that makes the garbage collector spin constantly.
You see, a lot of aspects come into play here. You should focus on them one by one. Start with understanding your application; then decide whether you have to improve its garbage-collection behavior, or whether you have to go the whole nine yards and make your application manage itself!
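As a concrete illustration of such a measurement, the JVM exposes its own memory figures through Runtime; the per-item size estimate and the safety margin below are assumptions you would have to tune:

    class MemoryGuard {
        // Rough per-item size; measure this for your actual BigItem.
        static final long ESTIMATED_ITEM_BYTES = 1_000_000;

        /** Returns true if the heap likely has room for one more item. */
        static boolean canGrow() {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            long headroom = rt.maxMemory() - used;
            // Keep a 2x margin: both the estimate and GC timing are fuzzy.
            return headroom > 2 * ESTIMATED_ITEM_BYTES;
        }
    }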

Does it cost a huge amount of time because the numbers are huge? [closed]

I have a program which involves a bunch of huge numbers (I have to put them into a big-number type). The time complexity is unexpectedly huge too. So I was wondering: are these two factors connected? Any comments are greatly appreciated.
Do they have a connection to each other? Probably not.
You can have a high-complexity algorithm working on small numbers (such as calculating the set of all subsets of ten thousand numbers, all in the range 0..30000), and you can have very efficient algorithms working on large numbers (such as simply adding up ten thousand BigInteger variables).
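To make the second case concrete, here is a single linear pass no matter how many digits the values have (the values themselves are arbitrary):

    import java.math.BigInteger;

    class BigSum {
        public static void main(String[] args) {
            BigInteger sum = BigInteger.ZERO;
            // 10,000 additions: O(n) loop iterations, however big the numbers are.
            for (int i = 1; i <= 10_000; i++) {
                sum = sum.add(BigInteger.valueOf(i).pow(10));
            }
            System.out.println(sum.bitLength() + " bits");
        }
    }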
However, both will probably have a cascading effect on the time it takes your program to run. Large numbers will add a bit; a high-complexity algorithm will add a bit more. I say 'add', but the effect is likely to be multiplicative, which is much worse. For example, an inefficient algorithm may make your code take 30% longer, and the use of BigInteger may add another 30% on top of that, giving you a 69% overall hit:
t * 1.3 * 1.3 = 1.69t
Sorry for the general answer but, without more specifics in the question, a general answer is the best you'll probably get. In any case, I believe (or at least hope) it answers the question you asked.

Shrink an image to 1/4 of its original size. [closed]

I am a programming newbie. I have been asked to shrink an image to 1/4 of its original size, and my tutor told me I can replace four pixels with one pixel to do it. How can I do this replacement in Java? Can anyone give me an example?
If you are new to programming then this is absolutely a lesson you should not be undertaking because it involves file IO, loops, data structures, and math. None of these are relevant to the basics you should be learning now.
The basic algorithm would be to read the pixels of the image into a matrix; every 2x2 square of pixels can then be replaced by one pixel by averaging the colors.
I am not going to give you a full answer because it would involve lots of API lookups to create a fully functional application to do this. There would be a lot of code you probably wouldn't even understand if I showed it to you.
If this is for school, you are either far behind in your studies, or else your teacher has set an unnecessarily complex early assignment. Wherever it is coming from, you should ask for a simpler assignment.
Either way, I recommend you take a step back and solve some simpler problems first so that you understand the components necessary to solve this problem.
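For reference, a minimal sketch of the 2x2 averaging described above, using java.awt.image.BufferedImage and assuming the width and height are even:

    import java.awt.image.BufferedImage;

    class Downscale {
        /** Returns an image half the width and height of src,
         *  replacing each 2x2 block with the average of its colors. */
        static BufferedImage shrink(BufferedImage src) {
            int w = src.getWidth() / 2, h = src.getHeight() / 2;
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int r = 0, g = 0, b = 0;
                    for (int dy = 0; dy < 2; dy++) {
                        for (int dx = 0; dx < 2; dx++) {
                            int rgb = src.getRGB(2 * x + dx, 2 * y + dy);
                            r += (rgb >> 16) & 0xFF;
                            g += (rgb >> 8) & 0xFF;
                            b += rgb & 0xFF;
                        }
                    }
                    out.setRGB(x, y, ((r / 4) << 16) | ((g / 4) << 8) | (b / 4));
                }
            }
            return out;
        }
    }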

Clustering: Finding an Average Reading [closed]

I am looking for an algorithm from the area of clustering or machine learning that will help create a typical data reading for a group of readings. The catch is that it must handle time-series data, so some traditional techniques (such as k-means) are not as useful.
Can anyone recommend places to look, or particular algorithms that would produce a typical reading and be relatively simple to implement (in Java), manipulate, and understand?
As an idea: try to convert all data types into time; then you will have vectors of the same type (time), and any clustering strategy will work fine.
By converting to time I mean that any measurement or data type we know about has time in its nature. Time is not a fourth dimension, as many think! Time is actually dimension zero. Even a point with no physical dimensions, which may not exist in space, exists in time.
Distance, weight, temperature, pressure, direction, speed... every measure we take can be converted into some function of time.
I have tried this approach on several projects and it paid back with really nice solutions.
Hope this might help you here as well.
For most machine learning problems in Java, Weka usually works pretty well.
See, for example: http://facweb.cs.depaul.edu/mobasher/classes/ect584/weka/k-means.html
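As a starting point, a minimal sketch against the Weka Java API; the file readings.arff and the cluster count are placeholders:

    import weka.clusterers.SimpleKMeans;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    class ClusterReadings {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("readings.arff").getDataSet();
            SimpleKMeans kmeans = new SimpleKMeans();
            kmeans.setNumClusters(3); // placeholder cluster count
            kmeans.buildClusterer(data);
            // Each cluster centroid is a "typical reading" for that group.
            System.out.println(kmeans.getClusterCentroids());
        }
    }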

Consistency in Cassandra [closed]

Under the AP side of the CAP theorem, is there a possibility (as in Cassandra) that if I write/update a value and immediately try to fetch it, the data is not found? Or should my read be delayed before fetching (to allow replication to settle)?
Can someone direct me to any links where people have addressed the consistency issue in Cassandra?
Cassandra can be used to give the consistency that you describe. If the number of nodes you read from (R) plus the number of nodes you write to (W) is greater than the replication factor (N), you will read back a value immediately after it was written (assuming there are no concurrent writers who may write a later value in the small window since you wrote). So as long as R+W>N you will get this behaviour.
A common way to do this is to read and write at CL.QUORUM, since this gives you good availability. You could also e.g. read at CL.ONE and write at CL.ALL, but then writes will fail if a single node is down.
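For illustration, a sketch with the DataStax Java driver (3.x-era API); the contact point, keyspace, and table are placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    class QuorumExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder()
                     .addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("mykeyspace")) {
                // Write at QUORUM ...
                Statement write = new SimpleStatement(
                        "UPDATE users SET name = 'alice' WHERE id = 1")
                        .setConsistencyLevel(ConsistencyLevel.QUORUM);
                session.execute(write);
                // ... and read at QUORUM: R + W > N, so the read sees the write.
                Statement read = new SimpleStatement(
                        "SELECT name FROM users WHERE id = 1")
                        .setConsistencyLevel(ConsistencyLevel.QUORUM);
                System.out.println(session.execute(read).one().getString("name"));
            }
        }
    }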
