We would like to start using Salesforce for managing sales contacts, but there are also some business functions involving contacts that we would like to retain in our current system.
As far as I can see, that means we're going to need a two-way sync: when anything changes on Salesforce, we need to update it in our system, and vice versa.
I'm suggesting some kind of messaging product that can sit in the middle and retry failed messages, because I have a feeling that without that, things are going to get very messy, e.g. when one or the other service is down.
The manager on the project would like to keep it simple and feels that using messages rather than realtime point-to-point calls is overkill, but I feel that without it we're going to be in for a world of pain.
Does anyone have any experience with trying to do two-way syncs? (Actually, I think even one-way sync suffers from the same risks.)
Many thanks for your insights.
I can't speak for your system, but on the Salesforce side, take a look at the getUpdated() and getDeleted() calls, which are designed for data replication. The SOAP API doc has a section that goes into detail about how to use them effectively.
We use Jitterbit to achieve two-way sync between Salesforce and our billing system. Salesforce has a last-modified field and so does our billing system (your system should have this; if not, add a timestamp field to the table in its SQL storage). The one important decision is to choose one of the keys as primary (either SF_ID or the other system's key) and create that key field in the other system, as it will be used for conflict resolution. The process is simple but multi-step:
1. Load all modified SF data into a flat file.
2. Load all modified secondary-system data into another flat file.
3. Look for conflicts by comparing the two files over the common key field.
4. Notify an admin of any conflicts.
5. Propagate all non-conflicting changes to the other system.
We run this process every 10 minutes, and we store the last timestamp on both systems between cycle runs so that we only pick up records that were modified between two cycles.
In case two users edit at the same time, you will either encounter a conflict and resolve it manually, or you will get the "last-saved-wins" outcome.
You also have to cater for newly created records: on the SF side, use upsert instead of update (with the external or SF key, depending on which you chose above); on your side, it depends on the system.
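To make the conflict step concrete, here is a minimal sketch of the per-cycle comparison, assuming each system exports its changed records as (key, last-modified) pairs keyed on the shared primary key. All names here are illustrative, not Jitterbit or Salesforce APIs:

    import java.util.*;

    // Minimal sketch of the per-cycle conflict check. A "conflict" is a key
    // that was modified in BOTH systems since the last cycle.
    public class SyncCycle {

        static class Record {
            final String key;          // the shared primary key (e.g. SF_ID)
            final long lastModified;   // epoch millis from the timestamp field
            Record(String key, long lastModified) {
                this.key = key;
                this.lastModified = lastModified;
            }
        }

        /** Returns keys modified in both systems since the last cycle. */
        static Set<String> findConflicts(List<Record> sfChanges, List<Record> otherChanges) {
            Set<String> sfKeys = new HashSet<>();
            for (Record r : sfChanges) sfKeys.add(r.key);

            Set<String> conflicts = new HashSet<>();
            for (Record r : otherChanges) {
                if (sfKeys.contains(r.key)) conflicts.add(r.key);  // changed on both sides
            }
            return conflicts;
        }
    }

Keys in the conflict set go to the admin-notification step; everything else is safe to propagate to the opposite system.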
Our framework is Grails. Say domain.com hosts an application that is currently used by one client. We want to allow another client to use the same functionality while keeping the two clients' data separate, so that the data can't mix; how do we do this? And whenever we want to add n clients to this application, what is the best approach so that, with little or no configuration, we can share a common WAR file across these clients while separating the databases?
How is this type of situation handled in real-world web development?
One more point: how do we make client1.domain.com work for client1 and client2.domain.com work for client2? How do we make the WAR file (in Java/Grails) behave like this? Otherwise, we have to programmatically control the clients within the project for every feature, or unnecessarily maintain a separate WAR file for each client, which would be a waste of resources.
You're describing multitenancy - create one table for N 'tenants' instead of N identical (or nearly identical) tables, partition it with a tenant_id column, and use that column to filter results in SQL WHERE clauses.
For example, the generated code for findByUsername would be something like select * from person where username='foo' and tenant_id=3 - the same query as a regular call, but with the tenant_id column restricting results to that tenant's data.
Note that previously simple things like unique constraints are now harder, because you usually want to restrict uniqueness within a tenant but allow a value to be reused across tenants. In this case, changing the unique constraint to cover the combination of username and tenant_id works and lets the database do the heavy lifting.
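In JPA/Hibernate mapping terms, a composite constraint like that might look like the following sketch (the entity and column names are illustrative):

    import javax.persistence.*;

    // Sketch of a per-tenant unique constraint: username must be unique
    // within a tenant, but may repeat across tenants.
    @Entity
    @Table(name = "person",
           uniqueConstraints = @UniqueConstraint(columnNames = {"username", "tenant_id"}))
    public class Person {

        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "username", nullable = false)
        private String username;

        @Column(name = "tenant_id", nullable = false)
        private Long tenantId;

        // getters/setters omitted
    }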
For a while there were several related plugins, but they relied on tweaking internal APIs, and some features broke in newer Hibernate versions. I believe that http://grails.org/plugin/multi-tenant-single-db is active; it was last updated over a year ago, but it is being used. Contact the authors if it looks like what you need, to be sure it's active. Note that it only works with Hibernate 3.x.
Hibernate 4 added support for multitenancy, but I haven't heard much about its use in Grails (which is expected, since it's not that common a requirement). It's not well documented, but this bug report highlights some of the potential pitfalls and should still be a working example (the test app is still on GitHub): https://jira.grails.org/browse/GPHIB-6.
I'd like to ensure that this is working and continues to work, so please let me know via email if you have issues later. It's a great feature and having it in Hibernate core makes things a lot easier for us. But we need to make it easy to use and well-documented, and that will happen a lot faster when it's being used in a real project.
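To give a flavor of the Hibernate 4 hook, here is a sketch of a tenant-identifier resolver. It is only a sketch: sourcing the tenant id from a ThreadLocal (set by, say, a servlet filter) is my assumption, and you still need to register the class via the hibernate.tenant_identifier_resolver property:

    import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

    // Sketch of Hibernate 4's multitenancy hook: tells Hibernate which
    // tenant the current unit of work belongs to. How the tenant id gets
    // into the ThreadLocal (e.g. a servlet filter) is an assumption here.
    public class ThreadLocalTenantResolver implements CurrentTenantIdentifierResolver {

        public static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

        @Override
        public String resolveCurrentTenantIdentifier() {
            String tenant = CURRENT_TENANT.get();
            return tenant != null ? tenant : "default";
        }

        @Override
        public boolean validateExistingCurrentSessions() {
            // Ensure sessions reused across requests still belong to the same tenant.
            return true;
        }
    }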
What is the best way to keep a log of user changes in my web application (Java/Tomcat/Struts/MySQL)? I give out accounts, and each account has multiple users. I want the account administrators to be able to see who did what at any given time, and I'd like to be able to access ALL of it. First, I need a way to know which fields have been changed; then I need to log the changes for each account in a place where administrators can see them. Obviously, I don't want to slow the app down. I read an answer on this site suggesting keeping a db log - querying the database for changes after each query is sent - but I wasn't sure how to do that.
This depends on the nature of your web application. Let's assume your web application is an e-commerce system that allows users to add a new product or update an existing one. When a user performs a specific action, like adding a new product, the basic goal is to capture the user name, the action, and a timestamp. The same goes for updating a product: you might want to track which values were updated, what the old values were, and when the change was made.
To achieve this, you first need to:
1. Create an audit table. Obviously you want to track the last-modified user, a timestamp, the creating user, and so on.
2. Create a logging mechanism that fires whenever a change or action is performed.
There are a few ways to do this: you can either do it in the application or leave everything to database triggers. I would suggest using triggers to detect any Create/Update/Delete event in the database, and have the trigger capture the details and write them to the audit table. I think this is the cleanest and lowest-maintenance way. However, if you want to log from the application, you have to make code changes: create new methods in your action classes that capture the details and write them to the audit table.
More information on MySQL triggers here.
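As a rough sketch of the trigger approach, installed from Java via JDBC (the product and audit_log tables and their columns are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Sketch: installing a MySQL audit trigger via JDBC. The product and
    // audit_log tables and their columns are invented for illustration.
    public class InstallAuditTrigger {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/shop", "user", "password");
                 Statement stmt = conn.createStatement()) {

                // After every update to product, record who changed what and when.
                stmt.execute(
                    "CREATE TRIGGER product_audit AFTER UPDATE ON product " +
                    "FOR EACH ROW " +
                    "INSERT INTO audit_log (table_name, action, record_id, " +
                    "                       old_price, new_price, changed_at) " +
                    "VALUES ('product', 'UPDATE', NEW.id, OLD.price, NEW.price, NOW())");
            }
        }
    }

You would create one such trigger per audited table and event type (INSERT/UPDATE/DELETE).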
I was looking for a similar method to log transactions and other things in my web app. While browsing Google, I found this link:
https://www.owasp.org/index.php/Logging_Cheat_Sheet which describes two possible ways to log: either to a database, or to log files on the filesystem...
- When using the file system, it is preferable to use a separate partition than those used by the operating system, other application files and user generated content
- For file-based logs, apply strict permissions concerning which users can access the directories, and the permissions of files within the directories
- In web applications, the logs should not be exposed in web-accessible locations, and if done so, should have restricted access and be configured with a plain text MIME type (not HTML)
- When using a database, it is preferable to utilize a separate database account that is only used for writing log data and which has very restrictive database, table, function and command permissions
- Use standard formats over secure protocols to record and send event data, or log files, to other systems e.g. Common Log File System (CLFS), Common Event Format (CEF) over syslog, possibly Common Event Expression (CEE) in future; standard formats facilitate integration with centralised logging services
They've beautifully explained the possible ways to log, what should be logged, and what should be avoided.
Hope it's useful to you.
An existing external system makes regular (every few seconds) updates to several database tables. We want to build a dashboard type user interface which allows the user to view additional records and important updates in near real-time. The user interface would also allow some transactions which would result in database changes.
Our thoughts are to use a stack with Hibernate and Flex (see http://dl.dropbox.com/u/1431390/overview.jpg), but we are open to using any free/open-source technology. There are a few issues we are unsure about with our proposed stack:
1) How do we automatically update the POJOs with database changes? As far as I understand it, there is no way for Hibernate to know about changes made outside its own session. Therefore, some sort of polling would have to be done to pick up new and changed records.
2) We were planning to push the data to datagrids within a Flex UI (using BlazeDS or WebORB). This seems to rely on identifying the changes and pushing them as updates down the channel. However, if we use the Hibernate->POJO approach, identifying these changes could be fairly complex, since we have refreshed the data. Is there a better solution that will push the changes on the fly? I would have thought this was a common requirement, but I can't find much information online.
Any advice would be gratefully appreciated on either the architecture or the specific issues.
Many thanks,
Ken
For 1) - Use polling, or if you have enough budget, use a database that supports pushing JMS messages from triggers (DB2, Oracle, MS SQL Server).
For 2) - There is a commercial product built by Adobe that can solve this problem more easily (it has the feature you are looking for). It has a steep learning curve and is targeted at the enterprise. Otherwise you will have to implement your own solution - refresh only the changed data, etc.
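For the polling route in 1), here is a minimal sketch with Hibernate, assuming each watched table carries a last-modified timestamp column (the Account entity and its lastModified property are invented for illustration):

    import java.util.Date;
    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    // Sketch: poll for rows the external system changed since the last cycle.
    // Assumes a mapped entity (here a hypothetical Account) carrying a
    // lastModified timestamp column.
    public class ChangePoller {

        private final SessionFactory sessionFactory;
        private Date lastPoll = new Date();

        public ChangePoller(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public List<?> pollChanges() {
            Session session = sessionFactory.openSession();
            try {
                Date now = new Date();
                List<?> changed = session
                    .createQuery("from Account a where a.lastModified > :since")
                    .setTimestamp("since", lastPoll)
                    .list();
                lastPoll = now;   // next cycle only sees newer changes
                return changed;   // push these to the Flex clients
            } finally {
                session.close();
            }
        }
    }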
I am in the process of creating a UI configuration tool for my pet project. One aspect of this tool lets the end user DEFINE his orchestration. I then need to save this orchestration definition in a database. There will be an executable version of this definition in a running system; the executable version is created dynamically on demand.
The idea is to separate the DEFINITION from the EXECUTABLE version, so that I have the flexibility to choose the runtime among BPMN, JPDL, or a POJO-based workflow solution (BeanFlow).
Limitation: I can't use the BPMN editors that come with frameworks like jBPM, Activiti, etc., as I want to use my own UI that is specific to my domain.
I need suggestions on HOW to PERSIST the definition.
Should I use rdbms tables? If so, is there a db schema I can borrow that is close to orchestration concepts?
Should I serialize my definition to BPMN/JPDL XML instance document?
Are there any other simple formats that I can use?
By "orchestration" I'm assuming you mean a finite state machine. Where the current state dictates what transitions can be followed to other states. The representation of states and transitions as edges and vertices often produces a directed acyclic graph, however there are times when the graph will cycle (e.g. draft -- submit for approval --> pending approval -- reject --> draft).
In practice, separating the definition from execution calls for a persistence format that can easily accommodate customization. As your system evolves you will find a number of unanticipated edge cases whose solution should not require altering a persistence schema, only code. This implies XML or a NoSQL solution - something whose schema is easily changed or non-existent.
Now, having written my own XML definition for this purpose (for uninteresting reasons I'll exclude), my suggestion is to use JPDL (or BPMN). The reason is that their definitions likely incorporate whatever you're considering now or will consider in the future, and they enable customization - such as hanging arbitrary data or behavior off them at a given point. You also get the advantage of tools already built - not just UI - for dealing with, for example, cycle detection and ensuring there is a path to completion.
Some of the interesting features I know JPDL possesses are the ability to help merge forked processes, timed tasks (including those that repeat periodically), and facilities for sending notifications. This last item - notifications - bears some further exposition. One of the things I've found with my own system is the need to send out configurable email whose content is based on the data flowing through. These existing engines make that relatively easy by providing a way to plug variables into text that is then dynamically evaluated at run time before transmission. They also provide bridges between the engine and whatever user store you have, for the purposes of sending notifications to groups of people, tasking them, and enforcing security policy.
Finally, depending on the scope of your system, you will probably still be using a database as well. What I suggest is storing the XML and the data being orchestrated in the database in a serialized format. Then, if the data is altered as it travels through the execution, write out serializations of the data - and perhaps of the workflow, if it has also changed - into a history/audit log table as well.
I would NOT use rdbms tables, or if you do, store the definitions as text blobs. Trying to model the definition as individual records is a bad idea, because it's much more inflexible and makes it difficult to change your definition over time. Many people would take different approaches, but I'd use JSON or YAML and avoid XML. The motivation is to keep things as simple as possible. Using XML, especially a formalized specific XML format, will make you spend much more time meeting an exact specification that doesn't actually do anything to help what you're trying to accomplish. JSON and YAML are both very easy to work with from a code perspective. YAML is more easily readable by humans and easier to edit, and isn't as tricky with punctuation and escaping as JSON. JSON is more widely used, and is smaller than YAML. JSON also has a binary counterpart, BSON, if document size is a concern.
Once you have an importer/exporter that converts between your internal objects and your data format, persisting via an RDBMS or another mechanism will be straightforward. You could even use CouchDB, which could offer other benefits to your application and may be a great fit.
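As a sketch of such an importer/exporter using Jackson (the WorkflowDefinition shape below is invented for illustration):

    import java.util.List;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Sketch: round-trip a workflow definition to JSON so the text can be
    // stored in a blob/clob column or a document store.
    public class WorkflowJson {

        public static class WorkflowDefinition {
            public String name;
            public List<String> states;
            public List<Transition> transitions;
        }

        public static class Transition {
            public String from;
            public String event;
            public String to;
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            String json = "{\"name\":\"approval\",\"states\":[\"draft\",\"pending\"]," +
                          "\"transitions\":[{\"from\":\"draft\",\"event\":\"submit\",\"to\":\"pending\"}]}";

            // Import: JSON text -> internal objects
            WorkflowDefinition def = mapper.readValue(json, WorkflowDefinition.class);

            // Export: internal objects -> JSON text, ready to persist
            String out = mapper.writeValueAsString(def);
            System.out.println(out);
        }
    }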
Very good question! Here is my two cents:
RDBMS: if you do this, you will be able to query the workflow instances - for example, which tokens are at 'node X'?
Storing XML as a clob: simplicity is the strength of this solution, but you can't really query the definitions - you can only fetch them by id.
NoSQL: there are a lot of different solutions for different problems. MongoDB is a popular one; it provides document-oriented persistence.
How about a simple serialisation of the composed UI using, for example, XStream, and then storing the serialised bits in the database as a binary column? Then when the user logs in, fetch the associated data, deserialise it, initialise it if required, and display.
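A minimal sketch of that XStream round trip (the UiConfig class is invented for illustration):

    import com.thoughtworks.xstream.XStream;

    // Sketch: serialise the composed UI definition with XStream and get back
    // XML text that can be stored in a blob/clob column.
    public class UiConfigSerializer {

        public static class UiConfig {
            public String layout = "two-column";
            public int refreshSeconds = 30;
        }

        public static void main(String[] args) {
            XStream xstream = new XStream();

            UiConfig config = new UiConfig();
            String xml = xstream.toXML(config);          // -> XML string to persist
            System.out.println(xml);

            UiConfig restored = (UiConfig) xstream.fromXML(xml);  // on login: restore
            System.out.println(restored.layout);
        }
    }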
I am not very familiar with databases and what they offer outside of the CRUD operations.
My research has led me to triggers. Basically it looks like triggers offer this type of functionality:
(from Wikipedia)
There are typically three triggering events that cause triggers to "fire":
INSERT event (as a new record is being inserted into the database).
UPDATE event (as a record is being changed).
DELETE event (as a record is being deleted).
My question is: is there some way my Java code can be notified by the database (preferably including the data that changed) when a record is updated/deleted/inserted, using some sort of trigger semantics?
What might be some alternate solutions to this problem? How can I listen to database events?
The main reason I want to do this is a scenario like this:
I have 5 client applications, all in different processes running on different PCs. They all share a common database (Postgres in this case).
Let's say one client changes a record in the DB that all 5 of the clients are "interested" in. I am trying to think of ways for the clients to be "notified" of the change (preferably with the affected data attached), instead of having them query for the data at some interval.
Using Oracle, you can set up a trigger on a table and then have the trigger send a JMS message. Oracle has two different JMS implementations. You can then have a process that 'listens' for the message using the JDBC driver. I have used this method to push changes out to my application instead of polling.
If you are using a Java database (H2), you have additional options. In my current application (a SIEM), I have triggers in H2 that publish change events using JMX.
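For flavor, here is a sketch of H2's Java trigger hook. Publishing via JMX/JMS is left out; a println stands in for whatever event mechanism you use, and the table name is invented:

    import java.sql.Connection;
    import org.h2.api.Trigger;

    // Sketch of an H2 trigger that reacts to row changes in Java. Install with:
    //   CREATE TRIGGER person_changed AFTER UPDATE ON person
    //   FOR EACH ROW CALL "com.example.ChangeTrigger";
    public class ChangeTrigger implements Trigger {

        private String tableName;

        @Override
        public void init(Connection conn, String schemaName, String triggerName,
                         String tableName, boolean before, int type) {
            this.tableName = tableName;
        }

        @Override
        public void fire(Connection conn, Object[] oldRow, Object[] newRow) {
            // oldRow/newRow hold the column values before and after the change.
            System.out.println("Row changed in " + tableName + ": "
                    + java.util.Arrays.toString(newRow));
        }

        @Override
        public void close() { }

        @Override
        public void remove() { }
    }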
Don't mix up the database (which contains the data), and events on that data.
Triggers are one way, but normally you will have a persistence layer in your application. This layer can choose to fire off events when certain things happen - say to a JMS topic.
Triggers are a last-ditch thing, because with them you're operating on relational items rather than "events" on the data. (For example, an "update" could in reality map to a "company changed legal name" event.) If you rely on the db, you'll have to map the inserts and updates back to real-life events... which you already knew about!
You can then layer other stuff on top of these notifications - like event stream processing - to find events that others are interested in.
James
Hmm. So you're using PostgreSQL and you want to "listen" for events and be "notified" when they occur?
http://www.postgresql.org/docs/8.3/static/sql-listen.html
http://www.postgresql.org/docs/8.3/static/sql-notify.html
Hope this helps!
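A sketch of consuming those notifications from JDBC. The channel name is made up, and the dummy query is the documented way to make the driver fetch pending notifications; some other session issues the corresponding NOTIFY (e.g. from a trigger):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import org.postgresql.PGConnection;
    import org.postgresql.PGNotification;

    // Sketch: LISTEN on a channel and poll the driver for NOTIFY events.
    public class PostgresListener {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");

            Statement stmt = conn.createStatement();
            stmt.execute("LISTEN record_changes");   // hypothetical channel name
            stmt.close();

            PGConnection pgConn = (PGConnection) conn;
            while (true) {
                // A dummy query makes the driver pick up pending notifications.
                Statement poll = conn.createStatement();
                poll.execute("SELECT 1");
                poll.close();

                PGNotification[] notifications = pgConn.getNotifications();
                if (notifications != null) {
                    for (PGNotification n : notifications) {
                        System.out.println("Got event on " + n.getName());
                    }
                }
                Thread.sleep(500);  // poll interval
            }
        }
    }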
Calling external processes from the database is very vendor specific.
Just off the top of my head:
- SQL Server can call CLR programs from triggers,
- PostgreSQL can call arbitrary C functions loaded dynamically,
- MySQL can call arbitrary C functions, but they must be compiled in,
- Sybase can make system calls if set up to do so.
The simplest thing to do is to have the insert/update/delete triggers make an entry in some log table, and have your Java program monitor that table. Good columns to have in your log table would be things like EVENT_CODE, LOG_DATETIME, and LOG_MSG.
Unless you require very high performance or need to handle hundreds of thousands of records, that is probably sufficient.
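A minimal sketch of that monitoring loop, assuming an auto-increment LOG_ID column alongside the EVENT_CODE/LOG_DATETIME/LOG_MSG columns suggested above (the change_log table name is illustrative):

    import java.sql.*;

    // Sketch: poll the trigger-fed log table for entries newer than the
    // last one we processed.
    public class LogTablePoller {

        private long lastSeenId = 0;

        public void poll(Connection conn) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT LOG_ID, EVENT_CODE, LOG_MSG FROM change_log " +
                    "WHERE LOG_ID > ? ORDER BY LOG_ID");
            ps.setLong(1, lastSeenId);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                lastSeenId = rs.getLong("LOG_ID");
                System.out.println("Event " + rs.getString("EVENT_CODE")
                        + ": " + rs.getString("LOG_MSG"));
            }
            rs.close();
            ps.close();
        }
    }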
I think you're confusing two things. They are both highly db vendor specific.
The first I shall call "triggers". I am sure there is at least one DB vendor who thinks triggers are different from this, but bear with me. A trigger is a server-side piece of code that can be attached to a table. For instance, you could run a PSQL stored procedure on every update in table X. Some databases allow you to write these in real programming languages; others only in their variant of SQL. Triggers are typically reasonably fast and scalable.
The other I shall call "events". These are triggers that fire in the database and allow you to define an event handler in your client program - i.e., any time there are updates to the clients database, fire updateClientsList in your program. For instance, using Python and Firebird, see http://www.firebirdsql.org/devel/python/docs/3.3.0/beyond-python-db-api.html#database-event-notification
I believe the earlier suggestion to use a monitor table is an equivalent way to implement this on another database. Maybe Oracle? SQL Server Notification Services, mentioned in another answer, is another implementation of this as well.
I would go so far as to say you'd better REALLY know why you want the database to notify your client program, otherwise you should stick with server side triggers.
What you're asking completely depends on both the database you're using and the framework you're using to communicate with your database.
If you're using something like Hibernate as your persistence layer, it has a set of listeners and interceptors that you can use to monitor records going in and out of the database.
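For instance, here is a sketch of a Hibernate interceptor that observes entity changes at flush time (registering it with the SessionFactory configuration is left out):

    import java.io.Serializable;
    import org.hibernate.EmptyInterceptor;
    import org.hibernate.type.Type;

    // Sketch: a Hibernate interceptor that sees every dirty entity at flush
    // time, so changed properties can be logged or broadcast.
    public class ChangeLoggingInterceptor extends EmptyInterceptor {

        @Override
        public boolean onFlushDirty(Object entity, Serializable id,
                Object[] currentState, Object[] previousState,
                String[] propertyNames, Type[] types) {
            // Compare old vs. new values, property by property.
            for (int i = 0; i < propertyNames.length; i++) {
                Object before = previousState != null ? previousState[i] : null;
                Object after = currentState[i];
                if (before == null ? after != null : !before.equals(after)) {
                    System.out.println(entity.getClass().getSimpleName() + "#" + id
                            + ": " + propertyNames[i] + " changed from "
                            + before + " to " + after);
                }
            }
            return false;  // we did not modify the entity state
        }
    }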
There are a few different techniques here depending on the database you're using. One idea is to poll the database (which I'm sure you're trying to avoid). Basically you could check for changes every so often.
Another solution (if you're using SQL Server 2005) is to use Notification Services, although this technology is supposedly being replaced in SQL Server 2008 (we haven't seen a pure replacement yet, but Microsoft has talked about it publicly).
This is usually what the standard client/server application model is for. If all inserts/updates/deletes go through the server application, which then modifies the database, then the client applications can find out much more easily what changes were made.
If you are using PostgreSQL, it has the capability to push notifications to a listening JDBC client.
I would suggest using a timestamp column, last updated, together with possibly the user updating the record, and then letting the clients check their local record timestamp against that of the persisted record.
The added complexity of callback/trigger functionality is just not worth it, in my opinion, unless it is supported by the database backend and the client library used - for instance, the notification services offered by SQL Server 2005 used together with ADO.NET.
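A minimal sketch of that client-side check (the table and column names are illustrative):

    import java.sql.*;

    // Sketch: client-side staleness check against a last_updated column.
    public class StalenessCheck {

        /** Returns true if the persisted row is newer than the client's copy. */
        public static boolean isStale(Connection conn, long recordId,
                                      Timestamp localLastUpdated) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT last_updated FROM contact WHERE id = ?");
            ps.setLong(1, recordId);
            ResultSet rs = ps.executeQuery();
            try {
                if (!rs.next()) return false;             // row deleted; handle elsewhere
                Timestamp persisted = rs.getTimestamp("last_updated");
                return persisted.after(localLastUpdated); // someone saved newer data
            } finally {
                rs.close();
                ps.close();
            }
        }
    }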