I have a desktop application for managing restaurant front-of-house operations such as reservations, guest data, and table turnover, with support for online reservations.
The problem I am trying to solve is how to capture customer spend and table state by integrating with MICROS. I would like to find out when a table is busy, when a check is printed, and what the total value of the check paid by the customer is.
Any help on how or where to start would be appreciated. The MICROS website is quite vague about what can be done.
-Thanks
One way to track this information is to create a polling application that runs on the MICROS server. You would need read access to the database, and ideally full DBA access. The schema is quite complicated, but if you Google something like "micros pos 3700 schema pdf" you'll come across some resources to get you going. Also, check out http://www.tek-tips.com/ and do some searching for MICROS if you go this route. There are examples of SQL and other users who have faced the same task of integrating with MICROS. You can query things like open checks and when a check was closed. That may give you an idea of when it was printed if you cannot find that out specifically.
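If you go the polling route, the code itself stays small. A hedged Java/JDBC sketch; the JDBC URL, credentials, and all table/column names here are placeholders to be replaced with the real 3700 schema names from the resources above:

```java
import java.sql.*;

public class MicrosPoller {
    // Hypothetical JDBC URL -- check how your MICROS server's database
    // (e.g. Sybase SQL Anywhere) is actually exposed before using this.
    private static final String URL = "jdbc:sybase:Tds://micros-server:2638";

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL, "reader", "secret")) {
            while (true) {
                pollOpenChecks(conn);
                Thread.sleep(30_000);          // poll every 30 seconds
            }
        }
    }

    private static void pollOpenChecks(Connection conn) throws SQLException {
        // "chk_dtl" and these column names are placeholders taken from the
        // general shape of the 3700 schema; verify against the schema PDF.
        String sql = "SELECT chk_num, tbl_num, chk_ttl "
                   + "FROM chk_dtl WHERE chk_open = 'T'";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("table %s has open check %s, total %s%n",
                        rs.getString("tbl_num"),
                        rs.getString("chk_num"),
                        rs.getBigDecimal("chk_ttl"));
            }
        }
    }
}
```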
I have never used MICROS specifically, but I have integrated with many systems before, and I generally find that if you call the vendor and tell them you want to integrate, they will usually be willing to tell you where their data is stored. Also, using their software for purposes other than what they intended could be copyright infringement unless you ask, and you would unofficially be a data processor for MICROS; you don't want to get sued, so it's probably best to ask.
Generally speaking, though, you can probably find the data you want by performing a single action before you open (so as not to confuse matters) and looking through the files in the install directory until you find information on the action you just performed; take note, and repeat for each action. Then you can watch the directory for changes, and if the changed file is one of the ones you care about, process it. The best files are often logs, as they are usually plain text, updated in real time, and easy to access, and you can usually pick out the patterns you want quite easily.
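For the watch-the-directory part, Java's WatchService does the heavy lifting. A minimal sketch (the install/log directory path is a placeholder):

```java
import java.io.IOException;
import java.nio.file.*;

public class LogWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get("C:/Micros/logs");   // placeholder install directory
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take();         // blocks until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context();
                if (changed.toString().endsWith(".log")) {
                    // parse the appended lines for the patterns you identified
                    System.out.println("changed: " + changed);
                }
            }
            key.reset();                           // re-arm the key for the next event
        }
    }
}
```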
You do need to keep in mind, though, that some data may only be output at the end of the day or the end of the transaction in a format you can use, so again I really recommend calling and asking.
We would like to start using Salesforce for managing sales contacts, but there are also some business functions regarding contacts that we would like to retain in our current system.
As far as I can see, that means we're going to need a two-way sync? I.e., when anything changes on Salesforce, we need to update it in our system, and vice versa.
I'm suggesting some kind of messaging product that can sit in the middle and retry failed messages, because I have a feeling that without that, things are going to get very messy, e.g. when one or the other service is down.
The manager on the project would like to keep it simple and feels that using messages rather than real-time point-to-point calls is overkill, but I feel that without it we're going to be in for a world of pain.
Does anyone have any experience with trying to do two-way syncs? (Actually, even one-way suffers from the same risks, I think.)
Many thanks for your insights.
I can't speak for your system, but on the Salesforce side, take a look at the getUpdated() and getDeleted() calls, which are designed for data replication. The SOAP API doc has a section that goes into detail about how to use them effectively.
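A rough Java sketch of those calls using the partner SOAP API via the WSC library; the credentials are placeholders, and the exact class names should be checked against the stubs generated for your API version:

```java
import com.sforce.soap.partner.*;
import com.sforce.ws.ConnectorConfig;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class ContactReplicator {
    public static void main(String[] args) throws Exception {
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername("user@example.com");        // placeholder credentials
        config.setPassword("password+securityToken");
        PartnerConnection conn = Connector.newConnection(config);

        Calendar end = new GregorianCalendar();
        Calendar start = (Calendar) end.clone();
        start.add(Calendar.MINUTE, -10);               // window: the last 10 minutes

        // IDs of Contacts created or updated in the window
        GetUpdatedResult updated = conn.getUpdated("Contact", start, end);
        for (String id : updated.getIds()) {
            System.out.println("changed: " + id);      // fetch + apply to your system
        }

        // IDs of Contacts deleted in the window
        GetDeletedResult deleted = conn.getDeleted("Contact", start, end);
        for (DeletedRecord rec : deleted.getDeletedRecords()) {
            System.out.println("deleted: " + rec.getId());
        }
    }
}
```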
We use Jitterbit to achieve two-way sync between Salesforce and a billing system. Salesforce has a last-modified field and so does our billing system (your system should have this; if not, add a timestamp field to the table in its SQL storage). The only important thing is to choose one of the keys as primary (either the SF_ID or the other system's key) and create that key field in the other system, as it will be used for conflict resolution. The process is simple and multi-step: load all modified SF data into a flat file, load all modified secondary-system data into another flat file, look for conflicts by comparing the two files over the common key field, notify an admin of any conflicts, and propagate all non-conflicting changes to the other system. We run this process every 10 minutes, and we store the last timestamp on both systems between cycle runs so that we only take records that were modified between two cycles.
In case two users edit at the same time, you will either encounter a conflict and resolve it manually or you will get the "last-saved-wins" outcome.
You also have to cater for newly created records: on the SF side use upsert instead of update (using the external or SF key, depending on which you chose above); on your other side it depends on the system.
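The conflict-detection step of that cycle is just a keyed comparison of the two extracts. A minimal Java sketch with a hypothetical change-record shape and stubbed file loading:

```java
import java.util.*;

public class ConflictDetector {
    // Hypothetical change record: the shared primary key plus last-modified time
    record Change(String key, long modifiedAt) {}

    public static void main(String[] args) {
        Map<String, Change> fromSalesforce = loadChanges("sf_changes.csv");
        Map<String, Change> fromBilling    = loadChanges("billing_changes.csv");

        for (String key : fromSalesforce.keySet()) {
            if (fromBilling.containsKey(key)) {
                // same record modified on both sides since the last cycle
                System.out.println("CONFLICT, notify admin: " + key);
            } else {
                System.out.println("propagate SF -> billing: " + key);
            }
        }
        for (String key : fromBilling.keySet()) {
            if (!fromSalesforce.containsKey(key)) {
                System.out.println("propagate billing -> SF: " + key);
            }
        }
    }

    static Map<String, Change> loadChanges(String file) {
        // stub: parse the flat file produced by the extract step, keyed on the
        // primary key chosen above
        return new HashMap<>();
    }
}
```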
We have a Java-based system with Postgres as the database. For some reason we want to propagate certain changes on a timed basis (say, every hour) to a different location. The two broad approaches are:
Logging all the changes to a file as and when they happen. However, this approach will scatter the code everywhere.
Somehow finding the incremental changes in Postgres between two timestamps in some log files and sending those. However, I am not sure how feasible this approach is.
Does anyone have any thoughts/ideas on this?
Provided that the database is not very large, you could do it quick and dirty by just:
Dumping the entire PostgreSQL database to a text file.
Sorting the text file (if the dump file is not already sorted *1).
Creating a diff against the previous dump file.
Of course, I would only advise this for a situation where your database is going to stay relatively small and you are just going to use it for a couple of servers.
*1: I do not know whether the dump is sorted; check the docs.
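A sketch of that cycle from Java, assuming pg_dump, sort, and diff are on the PATH and the database/file names are placeholders:

```java
import java.io.File;

public class DumpAndDiff {
    public static void main(String[] args) throws Exception {
        run("pg_dump", "--data-only", "-f", "dump_new.sql", "mydb"); // placeholder db name
        run("sort", "-o", "dump_new_sorted.sql", "dump_new.sql");    // *1: only if unsorted

        // diff exits 1 when the files differ, so don't treat that as failure
        ProcessBuilder diff = new ProcessBuilder(
                "diff", "dump_old_sorted.sql", "dump_new_sorted.sql");
        diff.redirectOutput(new File("changes.diff"));
        diff.start().waitFor();

        // rotate: the new dump becomes the baseline for the next run
        new File("dump_new_sorted.sql").renameTo(new File("dump_old_sorted.sql"));
    }

    static void run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) throw new RuntimeException("failed: " + String.join(" ", cmd));
    }
}
```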
There are a few different options available:
Depending on the amount of data being written you could give Bucardo a try.
Otherwise it is also possible to do something with PgQ in combination with Londiste
Or create something yourself using triggers to populate some kind of audit table (a sketch follows this list)
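For that do-it-yourself option, the usual shape is an audit table with a change timestamp, a trigger that copies each modified row into it, and an hourly job that selects rows between two timestamps. A minimal Java/JDBC setup sketch, assuming a placeholder "orders" table and a jsonb column for the row image:

```java
import java.sql.*;

public class AuditSetup {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "app", "secret"); // placeholders
             Statement st = c.createStatement()) {

            // Audit table: one row per change, with a timestamp to query on
            st.execute("CREATE TABLE IF NOT EXISTS orders_audit (" +
                       " id bigserial PRIMARY KEY," +
                       " op char(1) NOT NULL," +
                       " row_data jsonb NOT NULL," +
                       " changed_at timestamptz NOT NULL DEFAULT now())");

            // Trigger function capturing every insert/update/delete on "orders"
            st.execute("CREATE OR REPLACE FUNCTION orders_audit_fn() RETURNS trigger AS $$ " +
                       "BEGIN " +
                       "  IF (TG_OP = 'DELETE') THEN " +
                       "    INSERT INTO orders_audit(op, row_data) VALUES ('D', to_jsonb(OLD)); " +
                       "    RETURN OLD; " +
                       "  ELSE " +
                       "    INSERT INTO orders_audit(op, row_data) " +
                       "    VALUES (left(TG_OP, 1), to_jsonb(NEW)); " +
                       "    RETURN NEW; " +
                       "  END IF; " +
                       "END $$ LANGUAGE plpgsql");

            st.execute("DROP TRIGGER IF EXISTS orders_audit_trg ON orders");
            st.execute("CREATE TRIGGER orders_audit_trg" +
                       " AFTER INSERT OR UPDATE OR DELETE ON orders" +
                       " FOR EACH ROW EXECUTE PROCEDURE orders_audit_fn()");
        }
    }
}
```

The hourly propagation job then reduces to a single query, SELECT * FROM orders_audit WHERE changed_at > ? AND changed_at <= ?, shipping the result wherever it needs to go.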
There are many pre-packaged approaches, so you probably don't need to develop your own. Many of the options are summarized and compared on this Wiki page:
http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling
Many of them are based on the use of triggers to capture the data, with automatic generation of the triggers based on a more user-friendly interface.
Instead of writing your own solution, I would advise leveraging work already done by others. In the case you described I would go for PgQ + Londiste (both part of the Skytools package), which are easy to set up and use. If you do not want streaming replication, you could still use PgQ/Londiste to easily capture DMLs and write them to a file that you can load when needed. This would allow you to expand your setup/processing when new requirements come.
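If you go the PgQ route without full Londiste replication, a consumer that drains a queue into a file can be small. A hedged Java/JDBC sketch: the queue and consumer names are placeholders, the queue is assumed to exist with the consumer already registered, and the pgq.* function names should be verified against the Skytools documentation for your version:

```java
import java.sql.*;

public class PgqFileWriter {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "app", "secret")) { // placeholders
            try (PreparedStatement next = c.prepareStatement(
                     "SELECT pgq.next_batch('change_queue', 'file_writer')");
                 ResultSet rs = next.executeQuery()) {
                rs.next();
                long batchId = rs.getLong(1);
                if (rs.wasNull()) return;          // no new events yet

                try (PreparedStatement ev = c.prepareStatement(
                         "SELECT ev_type, ev_data FROM pgq.get_batch_events(?)")) {
                    ev.setLong(1, batchId);
                    try (ResultSet events = ev.executeQuery()) {
                        while (events.next()) {
                            // append to the file you later load at the remote site
                            System.out.println(events.getString("ev_type")
                                    + "\t" + events.getString("ev_data"));
                        }
                    }
                }
                try (PreparedStatement fin = c.prepareStatement(
                         "SELECT pgq.finish_batch(?)")) {
                    fin.setLong(1, batchId);       // mark the batch as processed
                    fin.execute();
                }
            }
        }
    }
}
```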
I want to build a movie website similar to IMDb.
I can get the website up and running,
and from the day I launch I can keep the data up to date,
but my question is: how can I make the old data available, say from 1900 to 2010?
This is the challenge that I am facing. Can anyone share the knowledge of how to go about this?
What strategy can I follow to make a website have old data as well as ongoing news?
The technologies I can think of to develop this website are Java, MySQL, and PHP.
How can I make the old data available, say from 1900 to 2010?
By this I mean: I would like to upload all the movie data, which is very old.
I don't see the problem here. Just store the data for all movies in the database, and provide a way for the user to:
qualify his/her searches with a date range, and
order the results in date order (a query sketch follows this list).
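Since the question mentions Java and MySQL, both of those amount to a single parameterized query. A sketch, assuming a hypothetical movies table with a release_date column:

```java
import java.sql.*;
import java.time.LocalDate;

public class MovieSearch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                "jdbc:mysql://localhost/moviedb", "app", "secret")) { // placeholders
            String sql = "SELECT title, release_date FROM movies "
                       + "WHERE release_date BETWEEN ? AND ? "      // the date range
                       + "ORDER BY release_date";                    // date order
            try (PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setDate(1, Date.valueOf(LocalDate.of(1900, 1, 1)));
                ps.setDate(2, Date.valueOf(LocalDate.of(2010, 12, 31)));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getDate("release_date")
                                + "  " + rs.getString("title"));
                    }
                }
            }
        }
    }
}
```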
I would like to upload all the movie data, which is very old
And is there any reason that you can't do this? (Apart from the obvious one of not having the data in the first place ...)
FOLLOWUP
It sounds like the problem you are worrying about is one of incomplete or bad data. There is no simple solution to that. You probably need to devise a strategy for dealing with it; e.g.
upload everything, and then do batch validation / cleanup runs on the live database
as above, but hide records until they have passed validation
validate each record as you attempt to upload it, and put those that fail validation to one side (sketched below).
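The third option is easy to sketch: run each record through the validation rules on the way in and divert failures to a quarantine list for later cleanup. The Movie record and the rules here are purely illustrative:

```java
import java.util.*;

public class UploadValidator {
    record Movie(String title, int year) {}                // hypothetical record type

    public static void main(String[] args) {
        List<Movie> incoming = List.of(
                new Movie("Metropolis", 1927),
                new Movie("", 1899));                      // bad: empty title, year too early
        List<Movie> accepted = new ArrayList<>();
        List<Movie> quarantined = new ArrayList<>();

        for (Movie m : incoming) {
            if (isValid(m)) accepted.add(m);               // goes to the live database
            else quarantined.add(m);                       // put to one side for cleanup
        }
        System.out.println("accepted=" + accepted.size()
                + " quarantined=" + quarantined.size());
    }

    // Keep the rules in one place so they can be tightened on the fly and
    // re-run in bulk over quarantined (or live) records
    static boolean isValid(Movie m) {
        return m.title() != null && !m.title().isBlank()
                && m.year() >= 1900 && m.year() <= 2010;
    }
}
```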
You also need to be able to:
identify common validation problems, and perform bulk corrections,
change (e.g. tighten) the validation rules on the fly and revalidate.
But this is all standard stuff for a large data-oriented application. (If there were easy solutions, everyone and their granny would be scraping the internet and building databases. TANSTAAFL.)
What kind of logging framework or API should I use for a Swing application that is used by multiple users on Unix?
Is it possible to log all verbose output/exceptions in one file per day, or even one file per user per day? Note that a user can open multiple instances of the same application.
Another solution I have is to save the exceptions into a database. But if I miss the exceptions, those will not be saved in the DB.
Does anybody have better solutions? Thank you very much!
You might like this article and discussion. The author mentions java.util.logging, which is discussed more extensively in this Java Logging Overview. In the context you describe, FileHandler should be able to sort out multiple instances per user without requiring a database.
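One way to get per-user, per-day files from java.util.logging: put the date and user name into the FileHandler pattern yourself, and let the %u placeholder keep concurrent instances of the app from clobbering each other. A minimal sketch (the "myapp" names are placeholders):

```java
import java.io.IOException;
import java.time.LocalDate;
import java.util.logging.*;

public class AppLogging {
    public static Logger createLogger() throws IOException {
        // one file per user per day; %u becomes a unique number per instance,
        // so two instances of the app never fight over the same file
        String pattern = System.getProperty("user.home")
                + "/myapp-" + System.getProperty("user.name")
                + "-" + LocalDate.now() + "-%u.log";
        FileHandler handler = new FileHandler(pattern, true); // append = true
        handler.setFormatter(new SimpleFormatter());

        Logger logger = Logger.getLogger("myapp");
        logger.addHandler(handler);
        logger.setLevel(Level.ALL);
        return logger;
    }

    public static void main(String[] args) throws IOException {
        Logger log = createLogger();
        log.info("application started");
        try {
            throw new IllegalStateException("demo");
        } catch (RuntimeException e) {
            log.log(Level.SEVERE, "something went wrong", e); // logs the full stack trace
        }
    }
}
```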
If you are distributing your software across a network, you have less chance of knowing each and every event a user performs. I am not sure whether log4j or any other framework helps track every user action in your situation, unless you have something running on your app server.
Well... if I were you, I would do it this way.
For exceptional conditions:
Come up with a good, solid exception framework (something like assigning a unique ID to each exception).
When an exception occurs, catch it and write the full stack trace to a database table with that same unique ID (a sketch follows this list).
Build some kind of search tool (a web application) that helps you see what went wrong during user actions.
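A sketch of steps 1 and 2: generate a unique ID per failure, capture the stack trace as text, and insert both into a hypothetical app_errors table:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.sql.*;
import java.util.UUID;

public class ErrorReporter {
    private final String jdbcUrl;                      // placeholder connection details

    public ErrorReporter(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }

    /** Stores the failure and returns the unique ID to show to the user / search on. */
    public String report(String userAction, Throwable t) {
        String errorId = UUID.randomUUID().toString(); // the unique ID from step 1

        StringWriter trace = new StringWriter();
        t.printStackTrace(new PrintWriter(trace));     // full stack trace as text

        String sql = "INSERT INTO app_errors (error_id, user_action, stack_trace, occurred_at) "
                   + "VALUES (?, ?, ?, CURRENT_TIMESTAMP)";
        try (Connection c = DriverManager.getConnection(jdbcUrl, "app", "secret");
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, errorId);
            ps.setString(2, userAction);
            ps.setString(3, trace.toString());
            ps.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();                       // last resort: don't lose the original error
        }
        return errorId;
    }
}
```

The returned ID is what the search tool in step 3 (and the user's error dialog) would display.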
For normal tracking I would probably save user actions into a table, but it hurts performance unless you come up with a good framework. Not sure if I answered your questions; please let me know if you have something to say.
-padur
Saving to a database seems a good idea. For example: when a user logs in to your Swing app, create a file in the user's temp directory, write all their actions/exceptions etc. into the file, and when they log out, read the file and save it into the database. Well, there are several ways to track user actions; this is one among them.
I need to create a project in which there are two databases, local and remote. The remote database needs to be synchronized daily with the local database, reflecting changes made in the local database.
I am using Java. The database is Oracle. I have Java/JPA code that does CRUD operations on the local database.
How do I synchronize changes to the remote database?
I would not do this in Java, but look for native Oracle database synchronization mechanisms/tools. This will
be quicker to implement
be more robust
have faster replication events
be more 'correct'
Please look at some synchronization products. SQL Anywhere from Sybase (where I work) is one such product. You may be able to get a developer/evaluation copy that you can use to explore your options. I am sure Oracle has something similar too.
The basic idea is to be able to track the changes that have happened in the central database. This is typically done by keeping a timestamp for each row. During the synchronization, the remote database provides the last sync time and the server sends to it all rows that have changed since then. Note that the rows that have been deleted in the central database will need some special handling to ensure they get deleted from the remote database.
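In code, the central side of that pull looks roughly like this; all table and column names are placeholders, and deletions are replayed from a tombstone table populated by a DELETE trigger:

```java
import java.sql.*;

public class SyncServer {
    /** Called with the timestamp of the remote database's last successful sync. */
    public void sendChangesSince(Connection central, Timestamp lastSync) throws SQLException {
        // rows inserted or updated since the last sync
        String changed = "SELECT id, payload, last_modified FROM customers "
                       + "WHERE last_modified > ?";
        try (PreparedStatement ps = central.prepareStatement(changed)) {
            ps.setTimestamp(1, lastSync);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // ship the row to the remote side (transport not shown)
                    System.out.println("upsert id=" + rs.getLong("id"));
                }
            }
        }

        // deletions need their own bookkeeping: a trigger on DELETE writes the
        // key into customers_deleted, and we replay those rows here
        String deleted = "SELECT id FROM customers_deleted WHERE deleted_at > ?";
        try (PreparedStatement ps = central.prepareStatement(deleted)) {
            ps.setTimestamp(1, lastSync);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println("delete id=" + rs.getLong("id"));
                }
            }
        }
    }
}
```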
A true two-way synchronization is a lot more complex. You need to also upload the changes from the remote database to the central one, and some conflict-resolution strategies have to be implemented for the cases when the same row has been changed in both the remote and the central database in incompatible ways.
The general problem is too complex to be explained in a response here, but I hope I have been able to provide some useful pointers.
The problem is that what you are asking for can range from moderately difficult (for a simple, not very robust system) to a very complex product that could keep a small team busy for a year, depending on requirements.
That's why the other answers said (basically) "find another way".
If you have to do this for a class assignment or something, it's possible but it probably won't be quick, robust or easy.
You need server software on each side, a way to translate unknown tables to data that can be transferred over the wire (along with enough meta-data to re-create it on the other side) and you'll probably want to track database changes (perhaps with a flag or timestamp) so that you don't have to send each record over every time.
It's a hard enough problem that we can't really help much. If I HAD to do that for a customer, I'd quote him at least a man year of work to get it even moderately reliable.
Good Luck
Oracle has sophisticated replication functionality to synchronize databases. Find out more..
From your comments it appears you're using Oracle Lite: this supports replication, which is covered in the Lite documentation.
I've never worked with it, but http://symmetricds.codehaus.org/ might be of use.