I am working on a distributed systems project. I am required to create a program that allows multiple users to edit the same text file concurrently. I have been looking around online for a relatively simple solution, but I haven't found one. I've read about BlockingQueue, but that doesn't make much sense to me. I talked to my TA and he suggested that each client get a copy of the text file, which they will edit. Those sub-files will then be merged back into the main copy. However, the problem is that I won't be able to update those sub-files while the clients are editing them.
As I understand it, you want an online text editor with which several users can modify files concurrently, and the updates should propagate in as close to real time as possible.
Here is what I would do:
When a user opens a file, he receives a copy of it and is added to the list of users who have that file open.
After a user makes a change, wait X seconds to accumulate further changes and then send them to the server.
The server processes the change requests for a file one after the other. (Different files can of course be handled in parallel, and it can be done more intelligently by splitting files into chunks that can be processed independently, at least on the server side: two changes can be processed in parallel if the sets of chunks they affect do not intersect.)
A change request is either accepted, in which case the changes are broadcast to all users that have the file open, or it is refused. This can be pretty complicated. The easiest way is to keep track of a version number and refuse all changes that come from older versions, as in the sketch below. (If you have a version number for each chunk and the chunks are small, you will only run into rejections when two or more people are working at almost the same location in a document at the same time. But it will be quite some work, since you will have to split/merge/delete/insert chunks if they become too big or too small.)
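For illustration, here is a rough sketch of the server-side version check (all class and method names are made up, and it assumes a single version number for the whole file rather than per-chunk versions):

```java
// Rough sketch only: one version number for the whole document,
// no per-chunk versioning. All names are invented for illustration.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class SharedDocument {
    private final StringBuilder content = new StringBuilder();
    private int version = 0;
    private final List<ClientConnection> subscribers = new CopyOnWriteArrayList<>();

    // Called by the server for one file; requests are handled one at a time.
    synchronized boolean applyChange(ChangeRequest change) {
        if (change.baseVersion() != version) {
            return false;               // client edited an outdated copy -> reject, client must re-sync
        }
        content.replace(change.start(), change.end(), change.newText());
        version++;
        for (ClientConnection c : subscribers) {
            c.send(change, version);    // broadcast the accepted change to everyone with the file open
        }
        return true;
    }
}

// Assumed helper types, not from any library:
record ChangeRequest(int baseVersion, int start, int end, String newText) { }
interface ClientConnection { void send(ChangeRequest change, int newVersion); }
```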
I'm building a JavaFX program that allows users to keep track of invoices for services rendered, and I've come across a question that I am struggling to answer on my own.
The program stores data in a MySQL server which is shared among all users of the JavaFX program. As of now (for simplicity's sake) the program fetches the data from the database once upon launch and then allows the user to modify the data on the server. This implementation leads to a lot of problems...
Let's say user1 and user2 log into the JavaFX program at the same time (thus guaranteeing that they have the same data from the database). Suppose that both go on to modify some data and at the end of the day they log off.
The following day user1 and user2 would log back on and find that a lot of data was modified, because the database will only return the most recent change from either user. That is, if they both happened to work on the same invoice form, the user who updated the database last would have their data saved, and the other user would be confused as to why the form has changed from their previous input.
This problem made me recall my days in high school, where I marvelled at how Google Docs essentially resolved this multi-user issue. I did a bit of light research on how Google Docs keeps track of changes among multiple users, but I couldn't come up with a solution based on differential synchronization (not to mention most of it flew over my head anyway).
I tried coming up with my own solution - perhaps having the MySQL data refresh every 30 seconds or so - but this kind of implementation has its own set of problems and doesn't quite resolve the issue.
I've seen other software that doesn't allow multiple users to modify the same invoice form at the same time (that is, two users can create and modify two different invoices concurrently, but they are not allowed to modify the same invoice at the same time). This implementation could work, but I'm still iffy about implementing it this way and wanted to see if there was another (possibly better/more elegant) approach to this problem, hence the following question.
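To make that idea concrete, I imagine the per-invoice lock would look roughly like this, with a `locked_by` column (the table and column names here are just made up for illustration):

```java
// Rough sketch of "only one user may edit an invoice at a time",
// using a hypothetical `invoice` table with a `locked_by` column.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class InvoiceLock {
    // Returns true if this user acquired the edit lock on the invoice.
    static boolean tryLock(Connection con, long invoiceId, String userName) throws SQLException {
        String sql = "UPDATE invoice SET locked_by = ? "
                   + "WHERE id = ? AND (locked_by IS NULL OR locked_by = ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, userName);
            ps.setLong(2, invoiceId);
            ps.setString(3, userName);
            return ps.executeUpdate() == 1;  // 0 rows updated -> someone else holds the lock
        }
    }

    static void unlock(Connection con, long invoiceId, String userName) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE invoice SET locked_by = NULL WHERE id = ? AND locked_by = ?")) {
            ps.setLong(1, invoiceId);
            ps.setString(2, userName);
            ps.executeUpdate();
        }
    }
}
```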
What are some standard methods/implementations of providing multi-user access to form/data based software?
I have been working on an AWS Lambda function with some custom Java code. The code requires a long execution time, but Lambda has a maximum execution timeout of 900 seconds. So I intended to save the in-memory state of the process to S3 as a file before the timeout, and then load that file from S3 and resume execution on the next invocation.
How can I save the complete state of a process to a file and then load and resume execution from that saved state?
Your JRE won't support such a risky feature, and for tasks that outgrow the Lambda timeout I would not suggest saving process state anyway. If you can add some code and details you'll get a more precise answer to your problem; however, some basic pointers...
Make sure the functions you're processing data with can be dynamically paused and started
Write functions to save/load data from a file in any format (JSON, CSV, etc.)
Write a function to identify when your data-processing task is complete
Hard-code a limit on how much to load, process, then write, in that order
Batch the process in series until you're notified that it's complete
Again, this question is really ambiguous, so my answer may not be at all what you need. In either case, what you want to save is data, not processes. In theory the computer itself is capable of saving the state of all registers, the stack, and the program counter, but that's a pretty big no-no for a lot of reasons that aren't really part of this discussion.
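To illustrate the "save data, not processes" point, a checkpoint can be as small as the few fields you need to resume; something along these lines (the field names are made up, and on Lambda you would persist the file to S3 or another store rather than the local file system, which does not survive between invocations):

```java
// Minimal sketch of checkpointing *data* (not a process image) between runs.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

class Checkpoint {
    long lastProcessedRow;   // example of resumable state
    String inputKey;         // e.g. which input object the job was working on

    void save(Path file) throws IOException {
        Properties p = new Properties();
        p.setProperty("lastProcessedRow", Long.toString(lastProcessedRow));
        p.setProperty("inputKey", inputKey);
        try (OutputStream out = Files.newOutputStream(file)) {
            p.store(out, "ETL checkpoint");
        }
    }

    static Checkpoint load(Path file) throws IOException {
        Properties p = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            p.load(in);
        }
        Checkpoint c = new Checkpoint();
        c.lastProcessedRow = Long.parseLong(p.getProperty("lastProcessedRow", "0"));
        c.inputKey = p.getProperty("inputKey", "");
        return c;
    }
}
```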
Sorry for my ambiguous question!
I know how an application can save its data to a file and restore it on the next run.
But the work behind my question is wrapping a Lambda handler around an ETL migration job application that takes longer than the Lambda maximum timeout.
I have already developed a framework for building Lambda handlers that can include the ETL job and deploy the handler to a Lambda function, but I haven't solved the Lambda timeout problem.
Meanwhile, I thought about tools like 'jmap'. I guessed that if 'jmap' can dump heap memory, the process might even be restorable from the dump file.
Thinking it over, there isn't a way to solve the problem along the lines of my guess.
So I would rather build a data-store/checkpoint architecture into the ETL job itself.
Because the ETL job program is built with another ETL solution, that is somewhat inconvenient.
I'm trying to write a program in Java for personal accounting. My initial plan was for the user to log in; the program would look him up in a text file and let him in. Then there would be a JTable which would load all his transactions (from a txt file) and show them. He would then add new ones or edit/delete existing ones, and the program would find the corresponding line and change it.
But as I started the implementation, I quickly found that manipulating the text file was very tedious.
I thought about an SQL database or JSON files, but I don't know if that's a good idea or where to start. I'm rather new to Java, so even opening a text file was a bit of a hassle for me.
Any thoughts?
Thank you.
Since it is for personal accounting and likely small, you could think of it like any document editing program (Notepad, Word, Excel, ...), meaning:
No login. Each person has a separate file, chosen when the program starts.
Load entire file into memory.
Nothing is saved until user clicks "Save" (unless you want some auto-recovery logic in case of program/machine crash).
That means that there are only two operations on the file (Load and Save), and both should be fairly simple.
Advantage: Simple and very fast.
Limitation: Memory constraint if file grows very large, and potential for data loss if auto-recovery/auto-save is not implemented.
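For example, if the data stays in a plain text file, Load and Save can be as small as this (the one-transaction-per-line format shown here is just an assumption):

```java
// Minimal load/save sketch, assuming one transaction per line
// in a plain text file ("date;description;amount").
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

class TransactionFile {
    static List<String[]> load(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .map(line -> line.split(";"))
                .collect(Collectors.toList());
    }

    static void save(Path file, List<String[]> transactions) throws IOException {
        List<String> lines = transactions.stream()
                .map(fields -> String.join(";", fields))
                .collect(Collectors.toList());
        Files.write(file, lines);   // overwrites the whole file on "Save"
    }
}
```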
How would you add system recovery in a multi-threaded environment? For example: if you have a system where multiple threads pick up files, process them, and persist them in a database, how would the system recover after a database failure? You don't want to process the trades again.
There are many different ways to answer this question depending on your system setup, and a bit more explanation would help. Still, here is an example that could possibly work for you.
I would probably look at a way to mark a file as in progress (e.g. a database record, or moving the file to a different directory while it is being processed). Then I would mark the file as finished once it has been processed (again by moving it or in some other way).
There is still a possibility of failure between finishing processing and marking the file as finished. However, this would limit the number of files you need to look at for recovery.
If you can keep track of which files were read in the same database, you can batch all your database changes together with the flag that marks the file as read. You can avoid committing the connection until you have made the changes and flagged the file.
This still has an issue if the database crashes mid-commit, but in that case you would probably have to restore a backup and rerun all the files anyway.
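A rough sketch of that single-transaction idea with JDBC (table names, columns, and the trade representation are made up for illustration):

```java
// Sketch: commit the processed records and the "file done" flag together,
// so a crash before commit leaves the file unmarked and safe to retry.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

class FileProcessor {
    void persistFile(Connection con, String fileName, List<String> trades) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement insert = con.prepareStatement(
                 "INSERT INTO trade (file_name, payload) VALUES (?, ?)");
             PreparedStatement markDone = con.prepareStatement(
                 "UPDATE processed_file SET status = 'DONE' WHERE name = ?")) {
            for (String trade : trades) {
                insert.setString(1, fileName);
                insert.setString(2, trade);
                insert.addBatch();
            }
            insert.executeBatch();
            markDone.setString(1, fileName);
            markDone.executeUpdate();
            con.commit();            // trades and the flag become visible together
        } catch (SQLException e) {
            con.rollback();          // file stays unmarked and can be retried
            throw e;
        }
    }
}
```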
I'm developing a REST client for a medical web service for both the Android and iOS platforms. After reading various articles and blog posts, I understood that I should persist data locally in order to increase app speed, improve the user experience, save network resources, and allow the user to work offline with some data. I decided to use local storage based on Couchbase Lite, but this question is storage-independent. I'm interested in the best ways of implementing it. Currently I have ended up with the following workflow:
When the user first logs in, I fetch some portion of recent data (in my case patients' health records and some reports).
Then, in the background, I populate my storage with the rest of the data.
I synchronize my data on push notifications so that I always store the latest server copy.
But I have a few questions: What is a normal size for local storage? The client data may grow a lot over time - in that case, should I delete the oldest data from the device so as not to exceed some predefined limit, say 50-100 MB? Or should I let the user control this and give him an interface to delete reports and records? Is the workflow I describe correct, or am I doing something wrong from the technical and UX points of view?
You should look at downloading an index (names and locations of resources, not the actual resource data) which lists all of the available resources. Display the list, and indicate for each item whether it's available locally or not. When an item is selected, display it (downloading it first if required). Allow the user to delete the local copy of a resource (with a select-all button).
As for storage size, this is device dependent. Apps store what data they need. The question is how much data users will be happy for you to save locally. Give them the option. You could also have a settings screen which offers to delete old resources (not accessed recently) when the size gets above XX Mb.
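A minimal sketch of what one index entry might look like (the names are illustrative, and the local check could just as well go against Couchbase Lite instead of the file system):

```java
// Illustrative only: an index entry holds metadata, not the resource itself.
import java.io.File;

class ResourceIndexEntry {
    final String id;           // server-side identifier of the resource
    final String title;        // what the list displays
    final String remoteUrl;    // where to download the data from when selected
    final File localCacheDir;  // where downloaded copies are kept

    ResourceIndexEntry(String id, String title, String remoteUrl, File localCacheDir) {
        this.id = id;
        this.title = title;
        this.remoteUrl = remoteUrl;
        this.localCacheDir = localCacheDir;
    }

    boolean isAvailableLocally() {
        return new File(localCacheDir, id).exists();
    }

    void deleteLocalCopy() {
        File local = new File(localCacheDir, id);
        if (local.exists()) {
            local.delete();
        }
    }
}
```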