I'm looking for some critique on my approach for storing the state of a bitmap editor for Android and iPhone mobile phones. Even a "Looks fine to me!" response would be great!
In the application, the current user document contains several bitmap layers (each maybe 1024 by 768 pixels) that can each be painted on. The basic requirements for the application are:
1. I need to be able to save and restore the document state.
2. When the user quits the application or gets a phone call, I need to be able to save the document state quickly (within about 2 seconds).
3. If the application crashes, I need to be able to restore the document state (it's OK if the user loses maybe 30 seconds of work, though).
For requirement 1, I cannot find any open file formats that support layers. I was going to go with the following file structure for storing my document:
document_folder/
    layer1.png
    layer2.png
    ...
    metadata.xml
The layers are just stored as .png files and the .xml file contains data such as which layers are currently visible. The document folder can either be opened as is by the application or the folder can be stored in a .zip file. This seems like a nice simple format for other applications to work with too.
In addition to .png files, I will also allow layers to be saved in a custom .raw file format which contains the unprocessed pixel data from the bitmaps. I can save these very quickly on the phone (<0.5s) whereas .png files take a second or two.
My plan for quick-saving the document is, on start-up, to create a folder called /autosave and save .raw versions of all the layers there. After a few editing commands on one layer, I would then update the .raw file for that layer in a background thread. For robustness when saving, I would write the layer to e.g. layer1_tmp.raw and, once I've confirmed the file has been fully written, replace layer1.raw with it.
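Roughly what I have in mind for the save itself; a minimal sketch, with method and file names just for illustration:

```java
import java.io.*;

// Write the layer to a temp file, flush it to disk, then atomically
// swap it in; a crash leaves either the old layer1.raw or the new one,
// never a half-written file.
class LayerSaver {
    static void saveLayer(byte[] pixels, File dir, String layerName) throws IOException {
        File tmp = new File(dir, layerName + "_tmp.raw");
        File dest = new File(dir, layerName + ".raw");

        FileOutputStream out = new FileOutputStream(tmp);
        try {
            out.write(pixels);
            out.getFD().sync(); // force the bytes to disk before renaming
        } finally {
            out.close();
        }

        // rename is atomic when tmp and dest are on the same filesystem
        if (!tmp.renameTo(dest)) {
            throw new IOException("Could not replace " + dest);
        }
    }
}
```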
Should the application crash during use, I would just reopen the /autosave folder. When the application is closed or the user gets a phone call, I only have to update the last-modified layer in /autosave. When the user wants to save, I convert all the .raw files to .png files and then zip the folder.
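On Android, the .raw-to-.png conversion at final save would look something like this (assuming the layers are stored as raw RGBA_8888 pixels; names are illustrative):

```java
import java.io.*;
import java.nio.ByteBuffer;
import android.graphics.Bitmap;

class PngExporter {
    static void rawToPng(File rawFile, File pngFile, int width, int height) throws IOException {
        // read the raw pixel data back in (4 bytes per pixel for RGBA_8888)
        byte[] pixels = new byte[width * height * 4];
        DataInputStream in = new DataInputStream(new FileInputStream(rawFile));
        try {
            in.readFully(pixels);
        } finally {
            in.close();
        }

        // wrap the pixels in a Bitmap and let the platform encode the PNG
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(ByteBuffer.wrap(pixels));

        FileOutputStream out = new FileOutputStream(pngFile);
        try {
            bmp.compress(Bitmap.CompressFormat.PNG, 100, out); // quality is ignored for PNG
        } finally {
            out.close();
        }
    }
}
```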
What do you think? Are there any obvious flaws? Is there a simpler way? Am I reinventing the wheel somehow? Thanks.
Your idea sounds good to me: save the layers in the background, as you go. Any layer that the user isn't currently editing should be queued to be saved as soon as they switch away from it to a different layer. If the app is interrupted, you just have to save the current working layer, which as you say can be done in 0.5s.
Why bother with the png format anyway? You only need it if exporting data to another machine/system, right?
I think you have a great plan there. I would probably go the same way (but that itself doesn't mean anything :-)
What I was thinking is whether you could have not just a worker thread saving the file, but a complete background service (with a worker thread of course, since a service itself also runs on the main thread).
This way you would have a guarantee that there is always something alive that can handle your layer deltas, regardless of whether the drawing activity has crashed or someone is calling you. Suddenly you don't have the same timing constraints (the write operation can take 10 seconds if it wants to; your activity is neither blocked by, nor dependent on, the write operation). Of course your service would then stop itself once it has emptied its save queue (to save system resources).
What I don't know, in order to promote this idea further, is how much data you're writing to the raw file. Do you write the complete 1024x768 layer every time, or do you only rewrite the changed parts? I'm also unsure how the data would actually be transmitted to the service (from the Activity). I don't know if there is a maximum size for a byte-array extra an Intent can handle.
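As a sketch of the idea (class and extra names are made up): an IntentService already gives you the queue plus the stop-when-empty behaviour, and passing a file path instead of the pixels sidesteps the Intent size question entirely (Binder transactions are capped at roughly 1 MB, so large byte-array extras are risky):

```java
import java.io.File;
import android.app.IntentService;
import android.content.Intent;

public class LayerSaveService extends IntentService {
    // hypothetical extras: the Activity writes a temp file and sends paths only
    public static final String EXTRA_TMP_PATH = "tmp_path";
    public static final String EXTRA_DEST_PATH = "dest_path";

    public LayerSaveService() {
        super("LayerSaveService"); // name of the internal worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // runs on the worker thread; intents queue up, and the service
        // stops itself automatically once the queue is empty
        File tmp = new File(intent.getStringExtra(EXTRA_TMP_PATH));
        File dest = new File(intent.getStringExtra(EXTRA_DEST_PATH));
        if (tmp.exists()) {
            tmp.renameTo(dest); // atomic replace on the same filesystem
        }
    }
}
```

The Activity would then just call startService() with those two paths whenever a layer is dirty.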
Hope this gives you further ideas.
Cheers!
In my program there is a requirement to read a file's content at program startup. Right now the file is loaded in the constructor. Since the file is fairly big and I/O is being performed, it takes a while for the screen to show.
My question is: can I use the Preferences API as an alternative to file I/O? Since the content does not change frequently, I want to avoid the I/O unless the user asks for it. I would like to load the content once if the preference is not set, and as long as the preference is not empty, fetch the content from the Preferences rather than the file.
Share your thoughts.
@Zhedar's point about Preferences is well taken. Instead, load the file in the background using SwingWorker, as shown here. With suitable granularity in publish() and process(), results can begin appearing immediately. The GUI will remain responsive while the remainder of the file loads.
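A minimal sketch of that, assuming a JTextArea as the target component:

```java
import java.io.*;
import java.util.List;
import javax.swing.*;

class FileLoader extends SwingWorker<Void, String> {
    private final File file;
    private final JTextArea view;

    FileLoader(File file, JTextArea view) {
        this.file = file;
        this.view = view;
    }

    @Override
    protected Void doInBackground() throws IOException {
        // runs on a worker thread, off the event dispatch thread
        BufferedReader in = new BufferedReader(new FileReader(file));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                publish(line); // hand each chunk to process()
            }
        } finally {
            in.close();
        }
        return null;
    }

    @Override
    protected void process(List<String> chunks) {
        // runs on the EDT, so touching the component is safe
        for (String line : chunks) {
            view.append(line + "\n");
        }
    }
}

// usage: new FileLoader(new File("big.txt"), textArea).execute();
```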
I would say it wouldn't make a difference where performance is concerned.
A Preference is nothing but a file on your disk, too. In fact, on UNIX and Mac OS X computers it is saved in an XML file, while Windows puts Preferences in the registry. You may be looking in the wrong direction here.
Is your file that big? Then don't load it in the main thread.
To achieve that, do long-running I/O operations in a different thread and show a loading screen or something similar instead. Since you didn't say which UI technology you're using, I can't provide anything more specific. If you're using Swing, I recommend you take a look at the SwingWorker class.
I need to implement a Java application for MS Word document management (or any text document editor), in which the Word documents will be stored in a central repository like a database or a server. Many users will be handling the documents, which requires complete version control. All the edits/changes made should be saved in the central document.
My question is: is it possible to save the changes to a central document (located in a database) after opening and editing it in a local document editor? I mean, while accessing and editing the central document, I think users will be doing that on a local copy, so how can just a document-save action sync that change to the central copy? Is there a way to implement this through code, something like a trigger on a save event?
All the changes that users make in their copy should be updated in the central copy. I plan to implement something like the way Google Docs works in this kind of scenario. Inviting your valuable suggestions and useful links, if any.
In Java you can listen to the file system and get notified when files change, so in theory you could use this to detect when the document has been saved and upload it to the server.
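For example, with Java 7's WatchService (the directory and extension are illustrative):

```java
import java.nio.file.*;

public class DocWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("C:/docs");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take(); // blocks until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context();
                if (changed.toString().endsWith(".docx")) {
                    System.out.println("Saved: " + changed);
                    // upload the file to the server here
                }
            }
            key.reset(); // re-arm the key, or no further events arrive
        }
    }
}
```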
However you'd need to solve these problems:
If the Java application closes/crashes while Word is open, nothing will detect a save, the user's changes to their document will get lost, and they will have no indication why.
People save Word documents all the time while editing them, so as not to lose data, but they'd only want the final save checked in as their revision.
Multiple users editing the same document at the same time and overwriting each other's changes.
There's no way in a silent automatic process for users to enter comments describing their changes.
It may be preferable to have a manual check-in/check-out upload/download process rather than trying to do it all automatically.
This is what I'd consider:
A (Java, in your case) GUI application which shows the documents in the central repository and allows launching Word (or whatever) to edit; this checks out and also locks the document in the repository. Then, when the Word (or whatever) process exits, the changes are committed (possibly after a confirmation dialog, where the user could choose to discard their edits, or be required to enter a comment about their change) and the repository lock on the file is removed.
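The launch-and-wait step could be as simple as this sketch ("winword.exe" is illustrative; note that Word sometimes hands the file to an already-running instance, in which case waitFor() returns early and you would need a more robust check):

```java
import java.io.File;

class EditSession {
    static void edit(File checkedOutCopy) throws Exception {
        Process word = new ProcessBuilder("winword.exe",
                checkedOutCopy.getAbsolutePath()).start();
        word.waitFor(); // returns when the launched process exits
        // ...then run the VCS commit and release the repository lock
    }
}
```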
Then you also need some mechanism for handling stale locks (for example, when a user's PC gets shut down or crashes without releasing the lock). Be sure to use a VCS which works well with binary files and which supports file locking on the server side, and code the app so that you can easily change the VCS software if your first choice isn't ideal.
So your Java app might do most of its work by executing other programs (VCS binaries for checking out / locking / committing, a GUI application for editing). While the app seems fairly simple, it might still be worth it to look at something like NetBeans RCP or Eclipse RCP and build your app on top of such a platform. I haven't actually used them myself, so take that as an idea to consider and research, not as a recommendation.
I have to create a jar with a Java application that fulfills the following requirements:
There is XML data packed in the jar which is read the first time the application is started. With every consecutive start of the application, the data is loaded from a dynamically created binary file instead.
A customer should not be able to reset the application to its primary state (e.g. if the binary file gets deleted for some reason, the application should fail to run again and give an error message).
All this should not depend on the OS it is running on (which means e.g. setting a registry entry in Windows won't do the job).
Summarizing I want to prevent a once started application to be reset in order to limit illegitimate reuse of the application.
Now to my ideas on how to accomplish that:
Delete the XML from the jar at the first run (so far I have come to the understanding that it is not possible for an application to edit its own jar; is that true?)
Set a variable/property/setting/whatever in the jar permanently at the first run (is that possible?)
Any suggestions/ideas on how to accomplish that?
Update:
I did not find a solution for this exact problem, but I found a simple workaround: along with my software I ship a certain file which gets changed after the program is started the first time. Of course, if someone keeps a copy of the original file, he can always replace it and start over.
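Roughly like this sketch (file names are made up; as noted, it only stops casual resets):

```java
import java.io.*;

class FirstRunGuard {
    // seed  = the file shipped with the software (non-empty on delivery)
    // state = the dynamically created binary file
    static void check(File seed, File state) throws IOException {
        if (state.exists()) {
            return; // normal start: state was generated on an earlier run
        }
        if (!seed.exists() || seed.length() == 0) {
            // seed already consumed (or tampered with): refuse to run
            throw new IOException("Application state missing - cannot start.");
        }
        // first run: build the binary state from the packed XML here...
        if (!state.createNewFile()) {
            throw new IOException("Could not create application state.");
        }
        // ...then truncate the seed so the application cannot be reset
        new FileOutputStream(seed).close();
    }
}
```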
Any user able to delete the binary file will, with enough time, also be able to revert any changes made in the jar. When the only existing part of the application is in the hands of the user, you won't be able to prevent changes to it.
You can easily just store a backup of the original jar, make a copy, use that for one run, delete it, copy the original jar again, and so on. You would need some sort of mechanism outside the user's machine, like an activation server. The user gets one code to activate an account, and can't use that code again.
I have an existing Swing desktop application that I wish to convert to a web application. The first thing stopping me from doing so is that the desktop application deals with writing to and reading from PDF files. Also, the user fills in PDF forms which need to be read by the application.
Now a typical use case in the desktop application is: the user logs in, opens a PDF form and fills it in. The Swing application manages where the file is stored, so it goes to the file, reads the form, extracts the data and stores the data in the DB. The user might not fill in the form all in one go. He might save it, come back to it later and continue.
All of this needs to be done by the web app now. My problem is that I don't want the user to download and upload the form multiple times to the server. That would eat bandwidth, and asking the user to save the file locally and upload it back once he completes the form doesn't appeal to me either, since the desktop application used to manage the location of these files so nicely.
Would I need to implement something like a Dropbox kind of thing? A small daemon running continuously, checking which files have been updated and uploading them to the server? That would be difficult, since at the server I wouldn't know whether the file was the latest or not. Has anyone done anything like this before?
I have another suggestion: why don't you show the user a web form with the same fields and transfer the values to the PDF after the user submits? This way the PDF never leaves the server and you transmit just the minimal amount of data.
Switching to a web-version of the application may force you to re-think some of the way you are doing things. Certainly browsers are intentionally limited in their access to the local file system which introduces a major hurdle for your current mode of operation.
Even if you could display the PDF within a browser, detect the completion of edits and send this back to the server from within browser code (which is probably possible), you'll be subject to different browsers doing different (strange) things with whatever pdf plugin is installed.
As Vitaliy mentioned already, switching to populating a (web) form in the browser means the whole download/upload problem goes away. But then you have to take what the user has done in a web page and pump that into a PDF somehow. If you don't HAVE to start with a PDF, but could collect the data and produce a PDF at the end, then you might have more options. For example, you could use iText to create a PDF directly if you don't have too many styles of document to work with. Or you could use something like Docmosis, which you can give templates to and then have it populate and render PDFs later. With the Docmosis option you can also ask it for the list of fields in the template, so you could build a web form based on the selected template, let the user fill it in, then push that data to Docmosis to produce the file.
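If you do start from a PDF template, the merge step with iText looks roughly like this (iText 5.x package names; the template and field names are illustrative):

```java
import java.io.FileOutputStream;
import java.util.Map;
import com.itextpdf.text.pdf.AcroFields;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;

class FormMerger {
    static void fill(String templatePdf, String outputPdf,
                     Map<String, String> values) throws Exception {
        PdfReader reader = new PdfReader(templatePdf);
        PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(outputPdf));
        AcroFields form = stamper.getAcroFields();
        for (Map.Entry<String, String> e : values.entrySet()) {
            form.setField(e.getKey(), e.getValue()); // field must exist in the template
        }
        stamper.setFormFlattening(true); // bake the values into the page content
        stamper.close();
        reader.close();
    }
}
```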
Hopefully there's some options there that are useful to you.
Adobe documents how to do this here. Never underestimate the power of design-by-google. I know this can be made to work because I've used PDF forms on line.
I worked on a similar issue a few years ago, although I wasn't dealing with signed forms. The signature definitely makes it a little more difficult. However, I was able to use iText to create a PDF form with fields, and only submit the field data back to the server. Offhand, I unfortunately do not remember exactly what/how we did it, but I can confirm it is doable (with limitations/caveats), e.g. the user had to have a PDF reader plugin installed and was forced to download the PDF every time.
Basically what I did was use iText to create an FDF from a PDF (with existing form fields). The submit button in the FDF actually submits the form data to a URL of your choosing (not unlike an HTML form). Once you have that data, I believe I merged the form fields (from the FDF) with the PDF on the server side using iText.
Once you use the server to maintain all the form data, the synchronization/locking process you use to ensure that a single user is updating the latest and greatest form data is up to you.
Your comment under jowierun's answer indicates that you want to deal with Word/Excel/etc. docs as well, so I am not entirely sure I understand your needs. Your initial post discussed the need to fill out PDF forms locally, but afterwards it sounds like you are looking for a file-sharing system instead.
Can you please clarify exactly what you are trying to accomplish?
Background:
Our software generates reports for customers in the usual suspect formats (HTML, PDF, etc.), and each report can contain charts and other graphics unique to that report. For PDFs everything is held in one place - the PDF file itself. HTML is trickier, as the report is basically the sum of more than one file. The files are available via HTTP through Tomcat.
Problem:
I really want to have a tidy environment and wrap the HTML reports into a single file. There are several formats to consider: MHTML, data URIs, and so on. This excellent question posits that, given the lack of cross-browser support for these formats, ZIP is a neat solution. This is attractive to me as I can also offer the zip for download as an "HTML report you can email" option. (In the past, users have complained about losing the graphics when they set about emailing HTML reports.)
The solution seems simple: a request comes in, I locate the appropriate zip, unpack it somewhere on the webserver, point the request at the new HTML file, and after a day or so tidy everything up again.
But something doesn't quite seem right about that. I've got a gut feeling that it's not a good solution, that there's something intrinsically wrong with it, or that a better way exists that I can't see at the moment.
Can anyone suggest whether this is good or bad, and offer an alternative solution?
Edit for more background information!
The reports need to persist on the server. Our customers are users at sites, and the visibility of a single report could be as wide as everyone at the site. The creation process involves the user selecting the criteria for the report and submitting it to the server for creation. Data is extracted from the database and a document built. A placeholder record goes into the database, and the documents themselves get stored on the fileserver somewhere. It's the 'documents on the fileserver' part that I'd like to be tidier - zipping also means less disk space used! Once a report is created, it is available to anyone who can see it.
I would have thought the plan would be that the zip file ends up on the client rather than staying on the server.
Without knowing about your architecture, I would guess at an approach like this:
User requests report
Server displays report as HTML
User perhaps tweaks some parameters, repeats request
Server displays report as HTML (repeat until user is happy)
On each of the HTML reports, there's a "download as zip" link
User clicks on link
Server regenerates report, stores it in a zip file and serves it to the user
User saves zip file somewhere, emails it around etc - server isn't involved at all
This relies on being able to rerun the report to generate the zip file, of course. You could generate a zip file each time you generate some HTML, but that's wasteful if you don't need to do it, and requires clean-up etc.
Perhaps I've misunderstood you though... if this doesn't sound appropriate, could you update your question?
EDIT: Okay, having seen the update to your question, I'd be tempted to store the files for each report in a separate directory (e.g. using a GUID as the directory name). Many file systems support compression at the file system level, so "premature zipping" probably wouldn't save much disk space, and would make extracting individual files harder. Then if the user requests a zip, you just need to build the zip file at that point, probably just in memory, before serving it.
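Building the zip in memory on demand is only a few lines; here the Map just stands in for wherever the report files live:

```java
import java.io.*;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

class ZipBuilder {
    static byte[] zipReport(Map<String, byte[]> files) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ZipOutputStream zip = new ZipOutputStream(buffer);
        for (Map.Entry<String, byte[]> file : files.entrySet()) {
            zip.putNextEntry(new ZipEntry(file.getKey()));
            zip.write(file.getValue());
            zip.closeEntry();
        }
        zip.close(); // flushes and writes the central directory
        return buffer.toByteArray(); // serve with Content-Type: application/zip
    }
}
```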
"Once a report is created, it is available to anyone who can see it."
That is quite telling - it means that the reports are shareable, and you would also like to "cache" reports so that they don't have to be regenerated.
One way to do this would be to hash the parameters together, in such a way that different parameter combinations (that result in a different report) hash to different values. Then you can use those hashes as keys into a large cache of reports stored on disk as zips (maybe the name of the file is the hash?).
That way, every time someone requests a report, you hash the parameters and check whether that report was already generated, and serve it up, either as a zip download, or unzipped and served as HTML as per normal. If the report doesn't exist, generate it and zip it, making sure you can identify it later as being produced by these parameters (i.e. record the hash).
One thing to be careful of is that file system writes tend to be non-atomic, so if you are not careful you will regenerate the report twice, which sucks, but luckily in your case is not too harmful. To avoid it, you can use a single thread to do the generation (slower), or implement some kind of lock.
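A sketch of the parameter hashing (sorting first makes the key independent of parameter order; the separator byte stops ("ab","c") and ("a","bc") colliding):

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

class ReportKey {
    static String keyFor(Map<String, String> params) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        // TreeMap iterates in key order, so equal parameter sets always
        // produce the same digest regardless of insertion order
        for (Map.Entry<String, String> e : new TreeMap<String, String>(params).entrySet()) {
            md.update(e.getKey().getBytes("UTF-8"));
            md.update((byte) 0); // separator between key and value
            md.update(e.getValue().getBytes("UTF-8"));
            md.update((byte) 0);
        }
        return new BigInteger(1, md.digest()).toString(16); // e.g. the zip's file name
    }
}
```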
You don't need to physically create zip files on a file system. There's nothing wrong with creating the zips in memory, streaming them to the browser and letting the GC take care of releasing the memory taken by the temporary zip. This of course introduces its own problem: it could be inefficient to continually recreate the zip each time a request is made. However, judge these things according to your needs and so on.