I'm developing an application (Java & JavaFX) that writes/reads data to/from a file. The problem is that I don't want to restrict the user to running only one instance of my app at a time, since I can't think of a reliable way of doing that which works on both Windows and Linux (e.g. a server). I've heard of socket-based and file-based approaches, and both seem flawed to me. Since the user can run multiple instances, writing/reading the data file gets really messy, because there's no guarantee that file locking will work reliably on both Windows and Linux (see the FileLock documentation).
To sum up: I can't restrict my app to a single instance, but that leads to problems with writing/reading the data file.
Is there anything I missed? Maybe there's some other way to solve my problem I can't think of? How do the "big" popular programs handle that?
Suggested: Use a socket solution
You could follow the techniques outlined in an answer to:
JavaFX Single Instance Application
FAQ
Addressing some additional questions:
heard of sockets and files - both are defective IMO.
You state your opinion that using sockets to set up a single instance application won’t work well enough for you. You are in the best position to decide that.
For some apps which want to achieve a single instance, the socket-based or file-based solution outlined in the answer to the linked question or other comments will work well enough.
"What happens if more than one user try to run the application? Won't they conflict on opening the socket?"
Prevent launching multiple instances of a java application
And:
Also, I can't be sure if chosen port (fixed, since all instances should check for one port) is being used by some other applications/processes
You may be able to address some of these concerns by enhancing the socket-based solutions outlined in the linked questions.
Enhanced Socket Solution Outline
If you want, you can write an enhanced algorithm to deal with some of these issues.
When another app instance startup occurs, you try to connect to a current instance on a well-known socket.
Check the response to the connection.
If it doesn’t respond with the correct protocol response (e.g. matching user and app name) then increment the port by 2 and retry.
Test the response again until either:
You get a match for the app/user combo, then send a signal to that app to display itself.
OR
If you get no match, then create a new instance on the tested open port.
I'm not suggesting you do that, just explaining that it is possible.
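For illustration only, here is a rough sketch of that probing handshake in plain Java. The base port, the step of 2, the handshake banner, and the "SHOW" command are all arbitrary assumptions for the example, not a fixed protocol.

```java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

// Sketch of the enhanced single-instance check outlined above.
public class SingleInstanceProbe {

    private static final int BASE_PORT = 48620;   // assumed well-known port
    private static final int MAX_PROBES = 10;     // how many candidate ports to try
    private static final String HELLO =
            "MYAPP-HELLO:" + System.getProperty("user.name");

    public static void main(String[] args) {
        for (int i = 0; i < MAX_PROBES; i++) {
            int port = BASE_PORT + i * 2;                      // step by 2, as in the outline
            if (signalExistingInstance(port)) {
                return;                                        // a matching instance is already running
            }
            ServerSocket server = tryToClaim(port);
            if (server != null) {
                new Thread(() -> acceptLoop(server), "single-instance").start();
                launchApplication();
                return;
            }
            // Port is owned by an unrelated application: try the next candidate.
        }
        launchApplication();                                   // give up on the check and run anyway
    }

    /** Connects to the port; returns true if a matching instance answered and was told to show itself. */
    private static boolean signalExistingInstance(int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("127.0.0.1", port), 300);
            s.setSoTimeout(300);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), StandardCharsets.UTF_8));
            if (HELLO.equals(in.readLine())) {
                PrintWriter out = new PrintWriter(
                        new OutputStreamWriter(s.getOutputStream(), StandardCharsets.UTF_8), true);
                out.println("SHOW");                           // ask the running instance to raise its window
                return true;
            }
        } catch (IOException notOurs) {
            // nobody listening, or it did not speak our protocol
        }
        return false;
    }

    private static ServerSocket tryToClaim(int port) {
        try {
            return new ServerSocket(port, 1, InetAddress.getLoopbackAddress());
        } catch (IOException taken) {
            return null;
        }
    }

    private static void acceptLoop(ServerSocket server) {
        while (!server.isClosed()) {
            try (Socket client = server.accept()) {
                client.setSoTimeout(300);
                PrintWriter out = new PrintWriter(
                        new OutputStreamWriter(client.getOutputStream(), StandardCharsets.UTF_8), true);
                out.println(HELLO);                            // identify ourselves to the prober
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
                if ("SHOW".equals(in.readLine())) {
                    // e.g. Platform.runLater(() -> primaryStage.toFront()) in a JavaFX app
                }
            } catch (IOException ignored) {
                // keep accepting
            }
        }
    }

    private static void launchApplication() {
        // Application.launch(MyApp.class) or similar would go here.
    }
}
```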
Alternative: OS native service
There are also other OS-specific mechanisms for handling this, such as Windows or Linux services, which you can investigate if you want. Those approaches are involved and vary by OS, so I won't discuss them in detail here.
For the OS-specific solutions, you usually would:
Create a native package for your app (e.g. using jpackage)
Install it.
Have the installer configure the app as a service
e.g. on Linux, create an init.d script with a PID file configured via chkconfig.
The service launches on boot and stops on shutdown.
The app is then accessed via a tray icon or something similar
The means of interaction is often OS version specific.
Alternative: Allow multiple app instances but use a single database instance
You may also consider using a database rather than files for data storage, as a database system can help solve many of the concurrent-access issues which arise with file-based solutions. Multiple clients can connect to the database, and the database and your app code can handle locks and collisions on data access to ensure data integrity is maintained. With such a solution there is no need to enforce a single running application instance per user (at least from a data integrity perspective).
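As one concrete (hedged) illustration: H2 supports a mixed "auto server" mode in which the first instance to open the database file starts a small TCP server and later instances connect through it, so the database coordinates concurrent access instead of your own file locks. The file path, table, and column names below are made up for the sketch.

```java
import java.sql.*;

// Sketch: several app instances sharing one embedded H2 database file.
// With AUTO_SERVER=TRUE the first instance to connect starts a small TCP
// server and later instances connect to it transparently.
public class SharedStore {

    private static final String URL = "jdbc:h2:~/myapp/data;AUTO_SERVER=TRUE";

    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(URL, "sa", "")) {
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS note("
                        + "id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, "
                        + "body VARCHAR(255))");
            }
            try (PreparedStatement ps = con.prepareStatement("INSERT INTO note(body) VALUES (?)")) {
                ps.setString(1, "written by instance " + ProcessHandle.current().pid());
                ps.executeUpdate();
            }
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, body FROM note ORDER BY id")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + ": " + rs.getString("body"));
                }
            }
        }
    }
}
```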
Related
I'm making a Java desktop application for a client and they've requested an offline platform whereby a 'master' version can distribute 'slave' applications to collect data and have the master process it afterwards.
The only part I'm not sure how to implement regarding the app is this master/slave system.
Do I have to program two different applications? One with less functionality? Do I have the same application, then have the master one output a file that the others read? Would they be the same application but when reading have less functionality?
I'm just not sure how to do this. Any tips?
Whether to bundle them into the same app or not is NOT the most important part of the problem.
The logic will surely be different for the master and the slave, and most likely you will arrange the different pieces of code in different classes anyway.
Put your effort into thinking about how you are going to do the distribution logic, instead of worrying about whether you are shipping one or two applications; you can easily choose between them once the code is ready (a sketch of the single-codebase option follows below).
As for the mechanism of doing the master-and-slave processing, what you have described is simply too vague for other people to give a proper suggestion.
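That said, here is a very small sketch of the "one codebase, two roles" idea; the --master flag and the Master/Slave classes are placeholders for however you end up splitting the logic.

```java
// Sketch: one application, two roles chosen at startup.
public class App {

    interface Role { void run(); }

    static class Master implements Role {
        @Override public void run() {
            // distribute slave packages, import and process collected data, etc.
            System.out.println("Running with full (master) functionality");
        }
    }

    static class Slave implements Role {
        @Override public void run() {
            // data-collection UI only; writes an export file the master can read
            System.out.println("Running with reduced (slave) functionality");
        }
    }

    public static void main(String[] args) {
        boolean master = args.length > 0 && args[0].equals("--master");
        Role role = master ? new Master() : new Slave();
        role.run();   // both roles share the rest of the codebase (models, file formats, ...)
    }
}
```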
Firstly, cheers to all PROGRAMMERS [Today = Programmers' Day :)]
Secondly,
I'm working on a project where the specifications require using a server as a front end and an application as the back end. The project is an advanced smart-home system. The server will handle commands coming from the client over the internet (say, like a remote control from outside the house) and send them (through some communication channel) to the application (I'm planning on using a Java application), which will handle the main logic, such as controlling hardware (lights, ...), reading from a local microphone, and accessing a database to act as an (offline) speech recognition system.
Now I'm still in the planning phase and I'm not sure which technologies are best for this project. I'm thinking of using Node.js or Apache as the server, a Java application as the back end, and any SQL database for the application's SRS (speech recognition system).
I hope this illustration demonstrates clearly how the system works:
The main question is:
What is the best way to make the Java application communicate with the server (communication channel [must be bidirectional]) ?
And do you recommend a specific server other than the ones mentioned for this job?
What crossed my mind so far:
1- JSP and servlets (making the server the application too). But I don't want the server to handle the offline stuff, and I'm not sure whether Java servlets can access the hardware interface. I also want the server kept separate from making critical decisions (a different layer, for security reasons, and because it won't be used as frequently as the local [offline] system).
2- Communication channel :
A- A shared file, but that's a bad idea since I don't want the application to keep checking from time to time whether the file contents have changed (command received) or not (excessive operations).
B- Inter-process communication through a port (socket communication) seems the best solution, but I don't know how that would turn out in terms of operation cost and communication errors.
OS used : Linux Raspbian
EDIT:
I'm sure ZMQ+Apache is good enough for this task, but how is it in comparison to WebServices (like SOAP) ? Would WebServices be a better solution in terms of standard implementation and security ?
All related suggestions are welcomed, TQ
ZeroMQ is great for internal communications, as are other similar messaging solutions.
For your case specifically, I can see that ZeroMQ would be a good fit.
Reasons:
Your offline server can stay agnostic of the web-facing solution.
Communication can be reliable and bidirectional, and other patterns are possible (pub/sub, req/rep, etc.).
Restarting either side does not require restarting the socket (connection) on the other side, as messages are queued.
It can scale not only on the same hardware, but also across a local area network or even over the internet.
Big support community. It might look a bit hard to get into, but in reality it is dead simple: go through the examples, and once the concept is understood it is very easy and neat to work with.
ZeroMQ has bindings for most popular languages, including Java and Node.js.
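As a hedged illustration of the request/reply pattern, here is roughly what the offline Java side could look like with the pure-Java JeroMQ binding (org.zeromq:jeromq); the endpoint, port, and message format are placeholders.

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Sketch of the offline (Java) side answering requests from the web-facing service.
public class HomeController {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");            // the web server connects to this port

            while (!Thread.currentThread().isInterrupted()) {
                String command = socket.recvStr();   // e.g. {"device":"light1","action":"on"}
                String result = handle(command);     // drive GPIO, query the DB, etc.
                socket.send(result);                 // reply goes back to the requester
            }
        }
    }

    private static String handle(String command) {
        // real hardware / speech-recognition logic would live here
        return "{\"status\":\"ok\"}";
    }
}
```

The web-facing Node.js side would open a matching REQ socket, connect to the same endpoint, send the command, and await the reply.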
Considerations:
You need to think about the packets and data that will be sent, so a popular data format such as XML or JSON is a good way to go.
Responsibilities of the different services: make sure they do not depend on each other too much. If the main offline server is the core of the system, make sure it does not depend on the web-facing service, so that the web front end can be removed/replaced/improved, etc.
A few more points to think about:
Why Java, and what about a modular approach? For example, if you want to expand and scale (add more sensors to the smart-home solution), then one giant application would have to change every time, and it is harder to maintain, as is maintaining different clients with their own needs. Think in a modular way: some core functionality for the offline stuff, plus several aggregator processes that talk to the different sensors. This makes it easier to support different setups and environments, and to maintain the system as a whole by improving independent components.
I'm writing a game app on GAE with GWT/Java and am having issues with server-side persistent data.
Players poll for active games and game states using RPC, all of which are stored on the server. Sometimes client polling fails to find game instances that I know should exist. This only happens when I deploy to Google appspot; locally everything is fine.
I understand this could be due to appspot being a cloud service that can spawn and use a new instance of my servlet at any point, with the existing data not persisting between instances.
Single games only last a minute or two and data changes rapidly (multiple times a second), so what is the best way to ensure that RPC calls to different instances use the same server-side data?
I have had a look at the DataStore API and it seems to be database-like storage, which I'm guessing will be way too slow for what I need. Also, Memcache can be flushed at any point, so that's not useful.
What am I missing here?
You have two issues here: persisting data between requests and polling data from clients.
When you have a distributed servlet environment (such as GAE) you cannot make a request to one instance, save data to memory, and expect that data to be available on other instances. This is true for GAE and any other servlet environment where you have multiple servers.
So you need to save data to some shared storage: the Datastore is costly, persistent, reliable, and slow; Memcache is fast and free, but not reliable. Usually we use a combination of both. Some libraries even combine both transparently: NDB, Objectify.
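For illustration, here is a minimal read-through cache combining the two, using the low-level App Engine Java APIs; the "GameState" kind and the 60-second expiry are made-up choices for the sketch.

```java
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

// Sketch of a read-through cache: check memcache first, fall back to the
// datastore, and write through on updates.
public class GameStateStore {

    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    public Entity load(String gameId) throws EntityNotFoundException {
        Key key = KeyFactory.createKey("GameState", gameId);
        Entity cached = (Entity) memcache.get(key);          // fast path, may be null
        if (cached != null) {
            return cached;
        }
        Entity fromStore = datastore.get(key);                // slower but authoritative
        memcache.put(key, fromStore, Expiration.byDeltaSeconds(60));
        return fromStore;
    }

    public void save(Entity gameState) {
        datastore.put(gameState);                             // durable write
        memcache.put(gameState.getKey(), gameState,
                Expiration.byDeltaSeconds(60));               // keep the cache in sync
    }
}
```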
On GAE there is also a third option for semi-persistent shared data: backends. These are always-on instances where you control startup/shutdown.
Data polling: if you have multiple clients waiting for updates, it's best not to use polling. Polling makes a lot of unnecessary requests (when data did not change on the server) and there is still a minimum delay (since you poll at some interval). Instead of polling, use push via the Channel API. There are even GWT libraries for it: gwt-gae-channel, gwt-channel-api.
Short answer: You did not design your game to run on App Engine.
You sound like you've already answered your own question. You understand that data is not persisted across instances. The two mechanisms for persisting data on the server side are memcache and the datastore, but you also understand the limitations of these. You need to architect your game around this.
If you're not using memcache or the datastore, how are you persisting your data? (My best guess is that you aren't actually persisting it.) From the vague details, you have not architected your game to run across multiple instances, which is essential for any app running on App Engine. It's a basic design principle that you don't know which instance any HTTP request will hit. You have to rearchitect it to use the datastore plus memcache.
If you want to use a single server, you can use backends, which behave like single servers that stick around (if you limit them to one instance). Frankly though, because of the cost, you're better off with Amazon or Rackspace if you go this route. You will also have to handle scaling on your own: i.e. if a game is running on a particular server instance, you need to build a way for requests for that game to consistently hit that instance.
Remember you can deploy GWT applications without GAE, see this explanation:
https://developers.google.com/web-toolkit/doc/latest/DevGuideServerCommunication#DevGuideRPCDeployment
You may want to ask yourself: Will your application ever NEED multiple server instances or GAE-specific features?
If so, then I agree with Peter Knego's reply regarding memcache etc.
If not, then you might be able to work around your problem by choosing a different hosting option (other than GAE). Particularly one that lets you work with just a single instance. You could then indeed simply manage all your game data in server memory, like I understand you have been doing so far.
If this solution suits your purpose, then all you need to do is find a suitable hosting provider. This may well be a cloud-based PaaS offer, provided that they let you put a hard limit (unlike with GAE) on the number of server instances, and that it goes as low as one. For example, Heroku (currently) lets you do that, as far as I understand, and apparently it's suitable for GWT applications, according to this thread:
https://stackoverflow.com/a/8583493/2237986
Note that the above solution involves a bit of fiddling and I don't know your needs well enough to make a strong recommendation. There may be easier and better solutions for what you're trying to do. In particular, have a look at non-cloud-based hosting options and server architectures that are optimized for highly time-critical, real-time multiplayer gaming.
Hope this helps! Keep us posted on your progress.
I intend to create a Java program/service that continuously polls RSS feeds using the Informa library's 'poller' functionality. I want to be able to add, delete, and update the RSS URLs in real time, while the program is running. I have no prior experience with the Informa library, but I need it to potentially scale to a lot of RSS feeds.
Does anyone have experience with the Informa library for polling RSS feeds? What other methods/libraries would you consider for polling a lot of RSS feeds (10,000+)?
What do you consider to be an accepted solution for controlling a running (console) Java program? I was thinking about using a control port for sending commands. Are there other mechanisms more commonly used to achieve this functionality?
Please let me know if you need more specific information.
Kind regards,
Ivo
What do you consider to be an accepted solution to control a running (console) java program. I was thinking about using a control port for sending commands. Are there other mechanisms more commonly used to achieve this functionality?
You can read the parameter from a .properties file. The only disadvantage with this is that the properties file will have to be read in each time you want to use that property, irrespective of whether the value has changed.
You can make use of JMX (Java Management Extensions). This is a fairly nice mechanism in which you expose a bean that can be managed using the jconsole command. Once done, you can remotely inject values into a running JVM.
There is a nice example on the Oracle website that shows you how to do it.
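For a rough idea of what that looks like in code, here is a minimal standard MBean; the bean, its attribute, and the ObjectName are invented for the example.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Minimal sketch of exposing a runtime-tunable setting over JMX.
public class JmxControlExample {

    // By JMX convention the management interface must be named <Impl>MBean.
    public interface PollerConfigMBean {
        int getPollIntervalSeconds();
        void setPollIntervalSeconds(int seconds);
        void addFeed(String url);
    }

    public static class PollerConfig implements PollerConfigMBean {
        private volatile int pollIntervalSeconds = 60;

        @Override public int getPollIntervalSeconds() { return pollIntervalSeconds; }
        @Override public void setPollIntervalSeconds(int seconds) { this.pollIntervalSeconds = seconds; }
        @Override public void addFeed(String url) { System.out.println("adding feed " + url); }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new PollerConfig(),
                new ObjectName("com.example.poller:type=PollerConfig"));

        // The program keeps running; attach jconsole to change the poll
        // interval or add feeds while it is live.
        Thread.currentThread().join();
    }
}
```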
Yes, a normal way to interact with a remote service would be a control port, as you described.
You can also control it via database settings: create a thread which polls those DB settings, and set them via some (web?) UI.
If you plan on running one service that does all the polling on one single machine, I would recommend against it; instead, run your service on several virtual machines, or set up multiple instances of the service on one big machine with plenty of memory. I have been using the com.sun.syndication (ROME) library for feed parsing/retrieval.
Not to be Captain Obvious, but I think it's easily achievable with an ordinary multi-threaded application and concurrent queueing, if I understood you correctly.
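Roughly, something like the following sketch: a thread-safe set of feed URLs that can be changed at runtime (from JMX, a control port, a DB poller, ...) and a scheduled pool that fans the actual fetching out to worker threads. The pool size, the interval, and the fetchAndParse() stub are placeholders for whatever RSS library and scheduling policy you choose.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the plain multi-threaded approach to polling many feeds.
public class FeedPollerService {

    private final Set<String> feedUrls = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(8);

    public void start() {
        scheduler.scheduleWithFixedDelay(
                () -> feedUrls.forEach(url -> scheduler.execute(() -> fetchAndParse(url))),
                0, 60, TimeUnit.SECONDS);
    }

    // These can be called at any time while the service is running.
    public void addFeed(String url)    { feedUrls.add(url); }
    public void removeFeed(String url) { feedUrls.remove(url); }

    private void fetchAndParse(String url) {
        // download the feed, diff against the last seen items, store/notify, ...
    }

    public static void main(String[] args) {
        FeedPollerService service = new FeedPollerService();
        service.addFeed("https://example.com/feed.xml");
        service.start();
    }
}
```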
Thanks.
Now there is a new requirement. I have got some ad hoc work at hand. The requirement is to connect a desktop-based Java application to read data from a mainframe, generated by some CICS transaction. [Basically I have to read all the records being appended to a file (the same way we do tail -f filename in Linux). This is just FYI; my actual requirement is something different.]
I inquired and came to know that my employer cannot provide me MQ or CICS Transaction Gateway access. He suggested some method of screen scraping. I have already done that using a VB.Net application with both the Quick3270 and IBM Communicator emulators. Both these emulators provide functions which can be used to read whatever is on the screen.
You can refer to the EHLLAPI programming details (the emulator programming API) - http://publib.boulder.ibm.com/infocenter/pcomhelp/v5r9/index.jsp?topic=/com.ibm.pcomm.doc/books/html/emulator_programming07.htm - if you are interested in learning.
But this method restricts me to the maximum number of bytes that can fit on the screen. With this method there is also significant network delay, as I have to refresh (basically move from one page to another in CICS) every time to get data which spans multiple pages.
Can you suggest a method such that my employer does not need to ask the client to open any port on his mainframe or install any software (as this is not possible for my employer)?
Can I use the 3270 terminal emulation and retrieve all (or at least more) of the data? That way my employer's requirement is fulfilled and he does not need to ask anything of his client. (In any case, from the emulator we are firing CICS transactions.) We want everything to be done at my employer's end itself, without disturbing the client's mainframe even a single bit.
Please do not suggest MQ as the client does not have it.
If you are still suggesting CICS Transaction Gateway, then please let me know how I would connect to the remote machine (I need technical details):
- What information do I need to ask from the client.
- What software do I need to install on my machine.
- Technical details of using that software.
Regards,
Nitin
I have two suggestions for you to look at. I have done both successfully. Your client setup can decide if either is palatable (the question doesn't mention not doing these things).
You can call your CICS code on the mainframe via a DB2 stored procedure. There is a standard one supplied by IBM, called EXECCICS, that we used for a project. You supply the standard CICS parameters and the comm area; the stored procedure executes the program on the mainframe and returns the comm area to you. You use JDBC. This solution is simple and easy to execute.
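A rough sketch of the calling side over plain JDBC follows; note that the connection URL, procedure name, and parameter list here are placeholders, not the actual EXECCICS signature (check IBM's documentation for the real parameters), and the IBM DB2 JDBC driver is assumed to be on the classpath.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

// Sketch of driving a CICS program through a DB2 stored procedure over JDBC.
public class CicsViaStoredProc {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://mainframe.example.com:446/DB2LOC";   // host/port/location from the client

        try (Connection con = DriverManager.getConnection(url, "user", "password");
             CallableStatement call = con.prepareCall("{CALL SYSPROC.MYCICSPROC(?, ?, ?)}")) {

            call.setString(1, "TRN1");                 // CICS transaction/program to run (placeholder)
            call.setString(2, "input comm area");      // comm area going in (placeholder layout)
            call.registerOutParameter(3, Types.VARCHAR);

            call.execute();
            System.out.println("comm area returned: " + call.getString(3));
        }
    }
}
```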
We have also enabled HTTP access to the CICS program on the mainframe. To my understanding (remember I just called it -- not enabled it) it is a pretty standard configuration. The client code just performs an HTTP POST to a specific end point. The resulting document is the comm area plus other goodies.
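For completeness, here is a minimal sketch of that HTTP route from the Java side; the URL, content type, and payload layout are placeholders for whatever the mainframe side actually exposes.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch: POST a request body to the CICS-exposed endpoint and read the response document.
public class CicsViaHttp {
    public static void main(String[] args) throws Exception {
        URL endpoint = new URL("https://mainframe.example.com/cics/myprogram");
        HttpURLConnection con = (HttpURLConnection) endpoint.openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/plain");

        byte[] body = "input comm area".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = con.getOutputStream()) {
            out.write(body);
        }

        try (InputStream in = con.getInputStream()) {
            String response = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            System.out.println("response: " + response);   // comm area plus other goodies
        }
    }
}
```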
These solutions were developed independently for the same project and are both in production. The only reason the HTTP method was added to the mix was a data size limit in the stored procedure, which HTTP removed.