Transform local software into online software - Java

We developed management software specifically for very small businesses. Unexpectedly, several bigger businesses liked it and started to use it. The problem is that our software only works locally (one store), and the bigger businesses want us to make it work online (two or more stores in different places, but sharing the same database).
So we would like to make our local software work online, reusing our existing Java Swing user interface.
We have thought about some solutions, but since it will be a big change, we would like to know the best way to proceed.
Important information:
The user interface is Java Swing and the database is PostgreSQL.
We have thousands of customers using our software and they would like to use it online too.
Below are the solutions we have thought about. Please let us know if there is a better way.
Solution #1
A single database on the internet, with all clients connecting to it.
Drawbacks:
Every single query would have to go over the internet.
The client application would need to contain the database password, so security would be completely compromised.
Solution #2
Each client would have its own database on the internet, with a different password.
Drawbacks:
Every single query would have to go over the internet.
It would need thousands of databases.
Difficult to maintain.
Solution #3
A single database on the internet, but clients connect through a web service that validates the customer's login data and returns the query results.
Drawbacks:
Every single query would have to go over the internet.
Building the web service would be somewhat complex, and it would have to return the results in some format we have not decided on yet (maybe simple CSV text or XML).
Solution #4
There would be a single database on the internet for all our customers.
Also, each client would have its own local database, so it could run fast SELECT queries.
Every update query would first be sent to a web service that would execute it against the online database and return whether it succeeded.
Besides that, we would have a mechanism to synchronize the local databases with the online database from time to time.
Drawbacks:
Very complex and difficult to implement.
The synchronization mechanism would be processing-intensive.
Is there a better way? How?

I would go with Solution #3. Build a database-backed service/API, and have the desktop client authenticate itself and use it. I would avoid having a local database as in Solution #4. You cannot rely on your users not to accidentally mess with it somehow and cause synchronization to be lost or corrupted. In addition, having a local database will slow you down when you want to create a different client, for example a mobile app.
If you decide to go with Solution #3, the current de facto standard is a JSON-based REST API. Also note that there are many caching techniques that can reduce the number of queries actually run.
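To make that concrete, here is a minimal sketch of the client side, assuming a hypothetical HTTPS endpoint and a token handed out by the service when the user logs in (the base URL, path and Bearer scheme are illustrative, not a prescribed design):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Hypothetical client-side helper: the Swing app never talks to PostgreSQL directly,
    // it only calls the HTTPS API with the token it received at login.
    public class ApiClient {

        private final String baseUrl;   // e.g. "https://api.example.com" (placeholder)
        private final String authToken; // issued by the web service after a successful login

        public ApiClient(String baseUrl, String authToken) {
            this.baseUrl = baseUrl;
            this.authToken = authToken;
        }

        // Performs a GET request and returns the raw JSON body.
        public String getJson(String path) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(baseUrl + path).openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");
            conn.setRequestProperty("Authorization", "Bearer " + authToken);

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                return body.toString(); // parse with a JSON library such as Jackson or Gson
            }
        }
    }

The important property is that the desktop client only ever holds its own session token, never the database password, so nothing sensitive ships with the Swing application.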

Related

Should my users connect directly to the database? How should I connect users to the database?

First, I have to mention that I am fairly new to databases.
I have currently set up a MySQL server and I am unsure how to connect my users to the database.
I want them to be able to use my software application to send certain predefined queries (SELECT / ALTER).
Currently, this is how I'm doing it: I ask for a username/password in my first form. I then connect to the database using a default username "javaApp" / password "javaApp", and then I check whether the data given in the login form corresponds to a user in the UserTable.
Can someone offer insight into how to do this more efficiently, professionally and safely?
Maybe some web services or something? I really don't have any idea.
How is this solved by professional software developers?
Thanks in advance!
I would not expose the database directly. I would use a layer between the user and the database so that you can control how they can query the database and what they can retrieve.
Do not define the database connection in code. Rather, use a JNDI resource (especially if you are running on top of an app server, e.g. Tomcat, which manages JNDI resources for you). This will decouple your code from the connection information. Also, you can use things like connection pooling with JNDI, which will enhance the performance of your app (and will also handle things like closed connections). Concurrency will be handled by the connection pool. See this Oracle article on JNDI resources and this one specific to Tomcat.
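For illustration, a minimal sketch of such a JNDI lookup, assuming a pooled DataSource registered under the hypothetical name jdbc/AppDB in Tomcat's context.xml:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    // Looks up a container-managed, pooled DataSource instead of hard-coding
    // the JDBC URL and credentials in the application code.
    public class Database {

        // Hypothetical resource name; it has to match the <Resource name="jdbc/AppDB" .../>
        // entry in Tomcat's context.xml (or your app server's equivalent).
        private static final String JNDI_NAME = "java:comp/env/jdbc/AppDB";

        public static Connection getConnection() throws NamingException, SQLException {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup(JNDI_NAME);
            return ds.getConnection(); // borrowed from the pool; close() returns it
        }
    }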
Use a service account i.e. an account that will be used to authenticate the app to the database. That account has nothing to do with the user accounts of the individuals accessing your app. Limit what the service account can do. If all you want to do is read data via the app, then limit the service account to just that (at least in the beginning).
You can use an ORM e.g. Hibernate in Java that will give you an abstracted view of the database. That could be too much for you though if all you have is a single table. In that case, use something called a PreparedStatement.
You want to be wary of letting users define their own SQL statements. That opens you up to a lot of SQL attacks (look up SQL injection).
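For the predefined queries you mentioned, a PreparedStatement with bound parameters addresses exactly this; a minimal sketch (the table and column names are just placeholders based on your UserTable):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // The user-supplied value is bound as a parameter, never concatenated into the SQL
    // string, which is what protects you from SQL injection.
    public class UserQueries {

        public static boolean userExists(Connection conn, String username) throws SQLException {
            String sql = "SELECT 1 FROM UserTable WHERE username = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, username); // the JDBC driver binds/escapes the value
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }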
Never ever store user credentials in the clear in a database. Hash the values.
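A minimal sketch of salted hashing with PBKDF2, which ships with the standard JDK (the iteration count and key length here are illustrative; a dedicated library such as bcrypt is also a common choice):

    import java.security.GeneralSecurityException;
    import java.security.SecureRandom;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    // Store the salt and the resulting hash per user; never the plain-text password.
    public class Passwords {

        public static byte[] newSalt() {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            return salt;
        }

        public static byte[] hash(char[] password, byte[] salt) throws GeneralSecurityException {
            // 65536 iterations / 256-bit key are example values, not a recommendation.
            PBEKeySpec spec = new PBEKeySpec(password, salt, 65536, 256);
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return factory.generateSecret(spec).getEncoded();
        }
    }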
Lastly, have a look at these awesome best practices.
That depends a lot on what you want to achieve in terms of security, ease of use, etc., I think. Each approach has its advantages and disadvantages. Often it's a trade-off.
Connect with one database account for all users
Advantages:
Flexibility: You are not bound to the methods of access restriction the DBMS provides.
E.g. you can include business rules in your application's rights management that the DBMS' rights management might not be able to express.
It's possibly easier to change the DBMS too, if you didn't use any specialties of its rights management. If the application stays the same, the access controls stay the same.
You can design the user administration in your application as you like, making it as easy and convenient as possible for you. If you use the DBMS to administer the users, you are restricted to the tools it provides for that job.
If the DBMS has a bug regarding the rights management and the respective company or community isn't providing a patch, you cannot do much. If such a case occurs in your program you can fix it yourself.
Use the tools you like and have mastered: It might be easier for you to do it properly in your application, simply because you know it and the respective programming language well. Maybe you don't know the DBMS that well (yet).
That might have an impact on the quality of your solutions.
Disadvantages:
Single point of failure: From the point of view of the DBMS all your users are the same user.
That is, if this account gets into the wrong hands, for whatever reason, then all is lost. Of course you can (and should!) restrict such a single DBMS account as much as possible on the DBMS' side. But anyone who has access to this account can do whatever the highest privileges of your application allow.
Also, logging who did what can be circumvented in such a case. As the intruder has all the rights your application has, and the DBMS has no way of distinguishing who that might be, they can mimic any user they like, and maybe even manipulate the (in-database) logs. Finding the traces of an attack might be harder.
The password (or whatever credential) for the single database account must be known by your application in order to supply it to the DBMS upon logon. In particular, you cannot "outsource" this knowledge to the users' brains. If they knew it, they could easily bypass your application by simply using a generic client (e.g. MySQL Workbench) and doing things your application never allowed. And attacks from inside are not that uncommon, e.g. a user who manipulates the work of others to look better to management when promotions are decided, or somebody who feels betrayed by the company and wants to take revenge. You name it. (Or replace "company" with "community" if it's not a commercial project.)
The credential must also be physically stored somewhere your application has access to it. Though there are measures to handle that well and securely, it's still more exposed than existing only in a user's head.
And if you have to change the password for that single account, you might have to deploy it somehow to all installations of your application. This might not be a (big) issue, depending on how your application is deployed though.
Less well tested: If you take one of the well-known DBMSs out there, you can be sure a lot of other people use it. A lot of people have tested it. Some may even have explicitly audited it.
The probability that one of these DBMSs has a bug that allows an account to break free from its restrictions, while not zero, is likely far lower than the probability of some hidden flaw in your less widely used and supported application.
(Please don't take that as an insult. I don't doubt your skills or anything. But I simply assume you are no heavyweight corporation like, e.g., Oracle with billions of dollars of resources to throw in, and don't (yet) have a large open source community behind your back with a lot of willing volunteers.)
For each user, a different database account
Advantages:
Reducing the impact: You don't have all the problems the single point of failure aka single DBMS account for all users comes with.
Of course, if an attacker gets the superuser account, OK, then you're done either way. But even if someone can steal the account data of one of your regular users, they cannot do more than this user could have done. Of course you'd have to keep the rights here as low as possible as well, to reduce the possible impact of such an incident.
As the DBMS knows exactly who it's dealing with, user actions can be logged safe from manipulation. You can reconstruct what was done and who did it, and be sure the logs don't lie to you.
You can "outsource" the logon credentials to the users' brains. There's no need for them to be stored physically somewhere accessible to your application, so they are less exposed. (Provided that the users don't write them on a Post-it next to their keyboards or do other such funny things. But that's another story...) And if a password needs a change for one user, it doesn't affect other users or the installations of the program at all.
Well tested, well supported: As I stated above, if you take one of the well-known DBMSs out there, you can be sure a lot of other people use it. It has been tested well and may even have been explicitly audited.
The probability of a vulnerability in the DBMS is therefore likely a lot lower than in your application, and a fix for such a bug might be readily available from the supporting community or company.
This might not be important for your project; I cannot judge the scale. But I don't want it to go unmentioned that certain DBMSs provide means to integrate into an organization's complex IT landscape, including user administration. For example, SQL Server can authenticate Active Directory users. That can help simplify, centralize and/or standardize user administration within an organization and therefore make it less error-prone. It can also be convenient for the users -- single sign-on can be provided.
Disadvantages:
Less freedom: Using the means of the DBMS for user and rights management, you have to adhere to its rules.
The DBMS might not provide a certain means or a certain level of access restriction you need. You may find it impossible to make the DBMS' rights management enforce some of your business rules. In such a case, there is not much you can do about it.
If you want to change the DBMS, you may find that you have used a lot of the old one's user-management specialties that cannot be implemented in the new one. You may have to start from scratch.
You're bound to the support of the company or community that produces the DBMS. If there is a bug in the rights management that poses a danger to you and the company or community is not providing a patch, there's not much you can do about it.
Might require learning: Especially when you're new to the DBMS world you'd have some learning to do to understand the means of access control a DBMS can provide in general and how a certain DBMS provides them or what it has to offer additionally.
Of course this doesn't come effortlessly and in the beginning you might make some mistakes.
Conclusion
Each of the methods has its own advantages and disadvantages.
Taking the "single DBMS user for all" route might be the right choice if the security requirements are not high. The greater flexibility might outweigh the drawbacks.
On the other hand, the security level that multiple DBMS accounts provide is far higher. If this is a concern, especially when we're talking about sensitive data, this might be the better choice, or even the only reasonable one.
A hybrid solution might even be the right choice: always have a fallback in the DBMS' access restrictions and, on top of that, place your own in the application.
To decide which approach is best for a project, its needs have to be taken into account and the advantages and disadvantages weighed against each other. There is no single best way, I guess.

How to make our Netty app scalable?

I'm starting a team of two to develop a chat server (both of us are college students). We did some research and found that Netty is the most suitable framework for this kind of concurrency-heavy app.
We have never had any experience developing server-side applications in Java; this is our first time tackling this kind of project, and I just need the right direction for us to build this server the right way.
Our goal is to build something like WhatsApp, Kik Messenger, Line or WeChat.
The real question is: how do we make our Netty app scalable? Do we need to use Redis for data persistence? Do we need to use MySQL for saving relationships, or a NoSQL database like MongoDB?
Hope someone could guide us.
You could have a look at the documentation if you haven't done so yet (a minimal server skeleton follows the links below):
SecureChat example
Netty User Guide
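To give you a feel for it, here is a minimal Netty 4 server skeleton along the lines of those examples (the port, handlers and the echo behaviour are placeholders; a real chat server would add framing and its own protocol):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.handler.codec.string.StringEncoder;

    // Minimal Netty 4 server: accepts connections and echoes text messages back.
    // A real chat server would add framing (e.g. LineBasedFrameDecoder), authentication
    // and routing of messages between users.
    public class ChatServer {

        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts incoming connections
            EventLoopGroup workers = new NioEventLoopGroup(); // handles the accepted channels
            try {
                ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(
                                new StringDecoder(),
                                new StringEncoder(),
                                new SimpleChannelInboundHandler<String>() {
                                    @Override
                                    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                        ctx.writeAndFlush("echo: " + msg + "\n");
                                    }
                                });
                        }
                    });
                ChannelFuture future = bootstrap.bind(8080).sync(); // port 8080 is arbitrary
                future.channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }

Horizontal scalability is then mostly about keeping per-user state (sessions, presence, undelivered messages) out of an individual server's memory and in shared storage (Redis, a database), so that any instance can serve any client.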
Scalability is a complex question. One could think of making your application able to run on multiple servers (horizontal scalability), but then it really depends on how your information/context/session is made available and updated...
You could of course think about using Redis for data persistence.
On database usage, it mainly depends on what your data looks like and whether you need relationships expressed in SQL or whether your application can handle them for you (to be clear: do you want the database to do the joins in your SQL commands, or do you want the application to do that?). It also depends on the amount of data (1 million rows, 1 billion, ...?) and on the number of connections.
So it is all your choice...
Then you can come back with the specific issues you run into.

GAE/GWT server side data inconsistent / not persisting between instances

I'm writing a game app on GAE with GWT/Java and am having issues with server-side persistent data.
Players poll for active games and game states using RPC, all of which are stored on the server. Sometimes client polling fails to find game instances that I know should exist. This only happens when I deploy to Google appspot; locally everything is fine.
I understand this could be to do with appspot being a cloud service that can spawn and use a new instance of my servlet at any point, with the existing data not persisting between instances.
Single games only last a minute or two and the data changes rapidly (multiple times a second), so what is the best way to ensure that RPC calls to different instances use the same server-side data?
I have had a look at the Datastore API and it seems to be database-like storage, which I'm guessing will be way too slow for what I need. Also, Memcache can be flushed at any point, so that's not useful.
What am I missing here?
You have two issues here: persisting data between requests and polling data from clients.
When you have a distributed servlet environment (such as GAE), you cannot make a request to one instance, save data in memory and expect that data to be available on other instances. This is true for GAE and any other servlet environment where you have multiple servers.
So you need to save data to some shared storage: the Datastore is costly, persistent, reliable and slow; Memcache is fast and free, but unreliable. Usually we use a combination of both. Some libraries even combine both transparently: NDB, Objectify.
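For illustration, a rough sketch of that read-through/write-through combination using the low-level Datastore and Memcache APIs (the "Game" entity kind and the repository class are hypothetical):

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.datastore.EntityNotFoundException;
    import com.google.appengine.api.datastore.Key;
    import com.google.appengine.api.datastore.KeyFactory;
    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    // Reads try Memcache first and fall back to the Datastore; writes always go to the
    // Datastore (the source of truth) and then refresh the cache.
    public class GameStateRepository {

        private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
        private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        public Entity loadGame(String gameId) throws EntityNotFoundException {
            Entity cached = (Entity) cache.get(gameId);
            if (cached != null) {
                return cached;
            }
            Key key = KeyFactory.createKey("Game", gameId);
            Entity game = datastore.get(key);
            cache.put(gameId, game); // best effort: memcache may be flushed at any time
            return game;
        }

        public void saveGame(String gameId, Entity game) {
            datastore.put(game);
            cache.put(gameId, game);
        }
    }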
On GAE there is also a third option for semi-persistent shared data: backends. Those are always-on instances where you control startup/shutdown.
Data polling: if you have multiple clients waiting for updates, it's best not to use polling. Polling makes a lot of unnecessary requests (when the data has not changed on the server) and there is still a minimum delay (since you poll at some interval). Instead of polling, use push via the Channel API. There are even GWT libs for it: gwt-gae-channel, gwt-channel-api.
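A rough sketch of the server side of that push, using the Channel API as described above (clientId and the JSON payload are placeholders):

    import com.google.appengine.api.channel.ChannelMessage;
    import com.google.appengine.api.channel.ChannelService;
    import com.google.appengine.api.channel.ChannelServiceFactory;

    public class GamePush {

        private final ChannelService channelService = ChannelServiceFactory.getChannelService();

        // Called once per client; the returned token is handed to the GWT/JavaScript side,
        // which uses it to open the channel.
        public String createChannelFor(String clientId) {
            return channelService.createChannel(clientId);
        }

        // Called whenever the game state changes on the server.
        public void pushUpdate(String clientId, String gameStateJson) {
            channelService.sendMessage(new ChannelMessage(clientId, gameStateJson));
        }
    }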
Short answer: You did not design your game to run on App Engine.
You sound like you've already answered your own question. You understand that data is not persisted across instances. The two mechanisms for persisting data on the server side are memcache and the datastore, but you also understand the limitations of these. You need to architect your game around this.
If you're not using memcache or the datastore, how are you persisting your data? (My best guess is that you aren't actually persisting it.) From the vague details, you have not architected your game to be able to run across multiple instances, which is essential for any app running on App Engine. It's a basic design principle that you don't know which instance any HTTP request will hit. You have to rearchitect to use the datastore + memcache.
If you want to use a single server, you can use backends, which behave like single servers that stick around (if you limit it to one instance). Frankly though, because of the cost, you're better off with Amazon or Rackspace if you go this route. You will also have to deal with scaling on your own - i.e. if a game is running on a particular server instance, you need a way to make sure that playing that game consistently hits that instance.
Remember you can deploy GWT applications without GAE, see this explanation:
https://developers.google.com/web-toolkit/doc/latest/DevGuideServerCommunication#DevGuideRPCDeployment
You may want to ask yourself: Will your application ever NEED multiple server instances or GAE-specific features?
If so, then I agree with Peter Knego's reply regarding memcache etc.
If not, then you might be able to work around your problem by choosing a different hosting option (other than GAE). Particularly one that lets you work with just a single instance. You could then indeed simply manage all your game data in server memory, like I understand you have been doing so far.
If this solution suits your purpose, then all you need to do is find a suitable hosting provider. This may well be a cloud-based PaaS offer, provided that they let you put a hard limit (unlike with GAE) on the number of server instances, and that it goes as low as one. For example, Heroku (currently) lets you do that, as far as I understand, and apparently it's suitable for GWT applications, according to this thread:
https://stackoverflow.com/a/8583493/2237986
Note that the above solution involves a bit of fiddling and I don't know your needs well enough to make a strong recommendation. There may be easier and better solutions for what you're trying to do. In particular, have a look at non-cloud-based hosting options and server architectures that are optimized for highly time-critical, real-time multiplayer gaming.
Hope this helps! Keep us posted on your progress.

Ipad application - Access a java based web server

I would like to create a native iPad application that displays data fetched from a web server. The application should be able to fetch tabular data, schedule things on the web server and receive alerts.
I suppose I could do the following:
For fetching tabular data, use a single web service call (will this work? What should the data interchange format be? Are there limitations on the data payload?)
For receiving alerts, would a persistent connection strategy be the best way, and are there better alternatives that I can tap into natively?
What remoting mechanisms are supported natively?
I have a GlassFish/Spring setup.
Thanks
Having no idea of the data makes it hard to answer.
A successful method applied by many is the web service approach: a simple query when the app loads or is used, falling back to showing the data that was loaded the last time the app had a connection.
If the data is time-sensitive, this is more of a dilemma.
You could simply note the last refresh time. If your app will be used primarily in the office, this might suffice.
Having a refresh button is a must.
The only reason to think about a persistent connection is if you want some form of server push. That is, do you need the server to inform the device of updates. Use cases for this are things like "chat".
Otherwise a timer asking for updates from the server is the way to go, since it is SO much easier to develop.
Apple's toolbox supplies NSURLConnection.
Your iPad app and your web server would have to be very loosely coupled.
Your question is very broad at the moment. As you go, other questions will arise.
One pointer though: you must find an exchange protocol that suits your needs (e.g. JSON) and implement it on both sides. The choice depends on your experience and the data you want to exchange.
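For example, on the GlassFish/Spring side a plain JSON endpoint could look roughly like this (this assumes a reasonably recent Spring MVC with Jackson on the classpath; the controller and DTO names are made up):

    import java.util.Arrays;
    import java.util.List;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical endpoint the iPad app could call; Spring/Jackson turn the return value into JSON.
    @RestController
    public class ReportController {

        // Plain DTO; Jackson serializes the public getters.
        public static class ReportRow {
            private final String name;
            private final int value;

            public ReportRow(String name, int value) {
                this.name = name;
                this.value = value;
            }

            public String getName() { return name; }
            public int getValue() { return value; }
        }

        // GET /reports returns e.g. [{"name":"orders","value":42},{"name":"returns","value":3}]
        @GetMapping("/reports")
        public List<ReportRow> listReports() {
            return Arrays.asList(new ReportRow("orders", 42), new ReportRow("returns", 3));
        }
    }

On the iPad side, NSURLConnection (or a timer-driven refresh, as described above) fetches and parses that JSON.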

Proper way to cache values from remote database in an Android application

I'm well aware of how I can communicate with an outside server using Android. Currently I'm using the relatively new AppEngine Connected Android project to do it, and everything works fairly well. The only thing that I'm concerned with is handling downtime for the server (if at all) and internet loss on the client side.
With that in mind, I have the following questions on an implementation:
What is the standard technique for caching values in a SQLite database on Android while still trying to constantly receive data from the web application?
How can I be sure that I have the most up-to-date information when that information is available?
How would I wrap up this logic (of determining which one to pull from and whether or not the data is recent) into a ContentProvider?
Is caching the data even that good of an idea? Or should I simply assume that if the user isn't connected to the internet, then the information isn't readily available.
Good news! There's an Android construct built just for doing this kind of thing: it's called a SyncAdapter. This is how all the Google apps do database syncing. Plus, there's a great Google I/O video all about using it! It's actually one of my favorites. It gives you a nice, really high-level overview of how to go about keeping local and remote resources in sync using something called REST.
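For orientation, the skeleton of such a sync adapter looks roughly like this (the class name is made up, and the registration via a bound Service plus the sync-adapter XML metadata is omitted here):

    import android.accounts.Account;
    import android.content.AbstractThreadedSyncAdapter;
    import android.content.ContentProviderClient;
    import android.content.Context;
    import android.content.SyncResult;
    import android.os.Bundle;

    // Skeleton only: the sync framework calls onPerformSync on a background thread.
    public class RemoteSyncAdapter extends AbstractThreadedSyncAdapter {

        public RemoteSyncAdapter(Context context, boolean autoInitialize) {
            super(context, autoInitialize);
        }

        @Override
        public void onPerformSync(Account account, Bundle extras, String authority,
                                  ContentProviderClient provider, SyncResult syncResult) {
            // 1. Fetch fresh rows from the remote web application.
            // 2. Compare them with what is already in the local SQLite/ContentProvider cache.
            // 3. Insert/update/delete through `provider` so the UI always reads consistent data.
        }
    }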
