I am attempting to build on the App Engine Mobile Backend Starter (MBS). The client it comes with handles a simple public single-thread messaging application through a general backend. I want to extend the client to build a "real" messaging app, by which I mean that a user should be able to have multiple private conversations.
I'm new to Google Cloud Datastore, but from discussions I thought it made sense to have a Message entity for each message, with each message's ancestor set to the conversation it belongs to. A user would then hold a list of ReferenceProperties to the conversations they are involved in.
As I look at the Mobile Backend Starter, I see that the Android client doesn't actually create any entities itself. It uses a shell class, "CloudEntity", to hold data and pass it over the Google Cloud Endpoints API to the general server code, which builds an Entity and inserts it into the Datastore (with intermediary steps through EntityDto?). My understanding is that the ancestor key needs to be available at the time the Entity is created, since it becomes part of the entity's key, so if there were a way to modify the Android client code for CloudEntity to handle the ancestor, that would be great.
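For reference, this is how an ancestor is set with the low-level Datastore Java API that the MBS server code ultimately targets; whether and how CloudEntity/EntityDto expose this is exactly my question, and the kind names below are hypothetical:

import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

public class MessageWriter {
    // Hypothetical kinds: a Message whose ancestor is its Conversation.
    public static void saveMessage(String conversationId, String sender, String body) {
        Key conversationKey = KeyFactory.createKey("Conversation", conversationId);
        // The parent key must be supplied when the Entity is constructed;
        // it becomes part of the message's own key and cannot change later.
        Entity message = new Entity("Message", conversationKey);
        message.setProperty("sender", sender);
        message.setProperty("body", body);
        DatastoreServiceFactory.getDatastoreService().put(message);
    }
}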
I've looked at code for the client and the server and it's a lot to take in, so I was hoping I could get some assistance:
1) Is it possible with the MBS client code to set ancestors for entities?
2) Is it even desirable to use ancestors in this way?
This is my first question, so thank you for your time and patience.
I would like to design the best architecture for the following project: I have an application running on any device (desktop, mobile...) where users can publish notifications to, and receive notifications from, other users they share data with.
Basically, a user can share with other users what he is doing in the application, with the other users being notified of the changes in real time, and vice versa. Users can only receive the notifications that other users have allowed them to see.
For example, when a user moves a widget on the screen, the application must store the widget's new position and also notify other users of the new position in real time so the change appears on their screens. For this need, I picture an event-driven architecture with a publish-subscribe pattern. However, I guess I would also need to handle a synchronous request-response pattern, for example when the application needs to retrieve the list of users to share a widget with.
I had a quick look at the book Streaming Data (Manning), where a streaming data architecture is described, but I don't know whether that kind of architecture would fit my needs. One difference in the implementation, for example, is that in my application the event-source producer can also be an event consumer (in the book, the event-source producer is a separate public streaming API and the real application is the only consumer).
If I loosely follow the book, my idea would be the following: WebSockets for data ingestion and data access, a broker like Kafka as the message repository, and a separate analysis service consuming Kafka topics and persisting data to a database. One open question is whether I could use a single WebSocket for both data ingestion and data access.
Which detailed architecture and tools would you use to fit these needs?
For the implementation, I would consider JavaScript for the client part and Java for the server part.
This is a pretty common use case for Kafka (leveraging both the broadcast and storage elements). There are some examples here which should help, although the context is slightly different:
https://github.com/confluentinc/kafka-streams-examples/tree/4.0.0-post/src/main/java/io/confluent/examples/streams/microservices
https://www.confluent.io/blog/building-a-microservices-ecosystem-with-kafka-streams-and-ksql/
In this example the CQRS pattern is used, so changes you make to the screen position would create events sent to Kafka; you then create a view service that other application instances can (long) poll to get changes.
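As a minimal sketch of that event flow (the topic name, key choice, and JSON payload are assumptions, not from the linked examples):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class WidgetEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by widget id keeps all moves of one widget ordered
            // within a single partition.
            String event = "{\"widgetId\":\"w-42\",\"x\":120,\"y\":80}";
            producer.send(new ProducerRecord<>("widget-moves", "w-42", event));
        }
    }
}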
You could also implement this with a WebSocket. There are a few implementations of this on GitHub, but I haven't tried any of them personally. The one complexity is that, if you want to scale out to many nodes, you need some way to map messages in Kafka to open WebSockets (whereas mapping requests to Kafka partitions in the REST example is handled automatically). Getting started with a single-server implementation wouldn't require this complexity, though.
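For the single-server WebSocket variant, a minimal JSR-356 sketch might look like this (the endpoint path and the wiring from a Kafka consumer loop to the broadcast method are assumptions):

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/updates") // hypothetical path
public class UpdatePushEndpoint {
    private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();

    @OnOpen
    public void onOpen(Session session) { SESSIONS.add(session); }

    @OnClose
    public void onClose(Session session) { SESSIONS.remove(session); }

    // A Kafka consumer loop (not shown) would call this for each record.
    public static void broadcast(String event) {
        for (Session s : SESSIONS) {
            if (s.isOpen()) {
                try {
                    s.getBasicRemote().sendText(event);
                } catch (IOException ignored) { /* drop dead/slow client */ }
            }
        }
    }
}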
We're designing an architecture for communicating between several applications, and we have decided to use Mirth as a (pseudo) ESB. In our processes we want to give control back to users as soon as we can, so when an action is fired by a user (for example, pressing Save after filling in a form), some (necessary) changes are made in the database and then a message has to be sent to another system. The user doesn't have to wait until the message is sent, so our application returns control once the database changes are done. Message composition happens asynchronously in the background. But we don't really know which approach we should follow:
a) Start a new thread in our app where we collect all the necessary data (starting from "primary data", i.e. some primary keys that let us find all the information), build an HL7 message, and send it to a queue that Mirth is listening on.
b) Send "primary data" to Mirth and delegate HL7 message composition to it.Mirth can access directly to database to collect necessary data or another option could be invoking some REST/SOAP services of our own.
For option (b), we have some doubts about how to invoke Mirth:
b.1) Our app makes the database modifications and writes the primary data to a queue (distributed transaction).
b.2) Our app makes the database modifications and calls a SOAP or REST service published by Mirth, which simply writes the message to a queue that Mirth is also reading from (no distributed transaction in our app).
Some argue that composing the message in our app and using Mirth only as a broker is "misusing" Mirth. On the other side, some colleagues find that accessing the app database from Mirth is very intrusive and that Mirth should not know our schema. The last option, invoking an app service from Mirth that returns all the information needed for the HL7 message, amounts to sending the "primary data" from the app to Mirth only to get it back when Mirth calls the service (passing that data as a parameter).
Thank you for your advice.
I'm not sure Mirth is the appropriate tool to use as an Enterprise Service Bus where your requirements include real-time notifications/events to allow the user to proceed after submitting a form.
Without knowing more, such as the architecture in play, we can't really advise you.
IMO, as someone experienced with Mirth integration as well as with designing database-dependent applications, I would say that Mirth isn't the appropriate tool for the job.
(1) There is not enough information for "expert advice", and there is no single, clearly technically justified answer.
(2) Option (a) looks like the least expensive and easiest to implement for a first version, especially with reuse of stable, tested libraries like HAPI (see the sketch after this list).
(3) In your design, treat your enterprise service bus as a black-box component and concentrate on designing the interfaces and clarifying the asynchronous message sequences. This way the service-bus internals and the message routing and queuing decisions can be postponed to deployment time with some coding effort, by following the adapter design pattern.
(4) Arguments worded like "misusing", "intrusive", "like it", or "nice" may indicate a valid point of view, but as such they do not constitute measurable, verifiable decision criteria or performance indicators and should not be used alone.
(5) This is the right time to apply a decision-making process and weight-evaluate the various options. As a minimal formal input I'd recommend the Plus/Minus/Interesting technique.
(6) The following points should not be omitted from your decision:
securing data privacy (health state is a private property protected by law in some countries)
fault tolerance (robustness, reliability, exception handling)
maintenance costs (do you have qualified people to maintain it; can the solution monitor and auto-correct itself, or will someone have to review millions of lines of logs manually)
development costs (do you have qualified people already, how many lines of code can you reuse vs. how many will you have to create/debug)
(7) I'm sorry that my answer is not directly helpful. My choice would be to compose the message in a reliable, secured application server, whatever that means in this case and regardless of how its axons or pseudopods would be connected.
Last but not least: record why you made the choice, forever, so that you can test and validate your assumptions at any time later, when the original decision makers are lost in the sands of time.
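Regarding point (2), a minimal sketch of in-app composition with HAPI (the HL7 version, message type, and patient fields here are arbitrary examples, not from the question):

import ca.uhn.hl7v2.model.v24.message.ADT_A01;
import ca.uhn.hl7v2.model.v24.segment.PID;
import ca.uhn.hl7v2.parser.PipeParser;

public class Hl7Composer {
    public static String composeAdmitMessage() throws Exception {
        ADT_A01 adt = new ADT_A01();
        adt.initQuickstart("ADT", "A01", "P"); // fills MSH with sensible defaults

        PID pid = adt.getPID();
        pid.getPatientIdentifierList(0).getID().setValue("12345");
        pid.getPatientName(0).getFamilyName().getSurname().setValue("Doe");
        pid.getPatientName(0).getGivenName().setValue("John");

        return new PipeParser().encode(adt); // pipe-delimited ER7 text
    }
}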
I am looking to implement a way to transfer data from one application to another programmatically in Google App Engine.
I know there is a way to achieve this with the Datastore Admin console, but that process is very time-inefficient.
I am currently implementing this using Google Cloud Storage (GCS), but that involves querying the data, saving it to GCS, and then reading it from GCS in the other app and restoring it.
Please let me know if anyone knows a simpler way of transferring data between two applications programmatically.
Thanks!
Haven't tried this myself, but it sounds like it should work: use the Datastore Admin to back up your objects to GCS from one app, then use your other app to restore that file from GCS. This should be a good method if you only require a one-time sync.
If you need to constantly replicate data from one app to another, introducing REST endpoints at one or both sides could help:
https://code.google.com/p/appengine-rest-server/ (this is in Python, I know, but just define a version of your app for the REST endpoint)
You just need to make sure your model definitions match on both sides (pretty much update the app on both sides with the same deployment code) and have the side that needs to sync data track the time of the last sync and use the REST endpoints to pull in new data. Cron jobs can do this.
Alternatively, create a PostPut callback on all of your models to make a POST call to the proper REST endpoint on the other app every time a model is written to your datastore (sketched below).
You can batch update with one method, or keep a constantly updated version with the other method (at the expense of more calls).
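A rough sketch of that callback approach, assuming the Java low-level datastore callbacks and URL Fetch APIs; the kind name and target URL are hypothetical:

import java.net.URL;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.PostPut;
import com.google.appengine.api.datastore.PutContext;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

public class SyncCallbacks {
    // Runs after any entity of the (hypothetical) kind "Item" is written.
    @PostPut(kinds = {"Item"})
    public void replicate(PutContext context) {
        try {
            Entity saved = context.getCurrentElement();
            HTTPRequest request = new HTTPRequest(
                    new URL("https://other-app.appspot.com/rest/item"), // hypothetical endpoint
                    HTTPMethod.POST);
            request.setPayload(saved.getProperties().toString().getBytes("UTF-8"));
            URLFetchServiceFactory.getURLFetchService().fetch(request);
        } catch (Exception e) {
            // A real implementation would log and retry (e.g. via a task queue).
        }
    }
}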
Are you trying to move data between two App Engine applications, or trying to export all your data from App Engine so you can move to a different hosting system? Your question doesn't have enough information to understand what you're attempting to do. Based on the vague requirements, I would say this would typically be handled with a web service that you write in one application to expose the data, which the other application calls to consume it. I'm not sure why Cloud Endpoints was downvoted, because it provides a nice way to expose your data as a JSON-based web service with a minimum of coding fuss.
I'd recommend adding some more details to your question, like exactly what you are trying to accomplish, and maybe a mock data sample.
You could create a backup of your data using the bulk loader and then restore it on another application.
Details here:
https://developers.google.com/appengine/docs/python/tools/uploadingdata?csw=1#Python_Downloading_and_uploading_all_data
Refer to this post if you are using Java:
Downloading Google App Engine Database
I don't know if this could be suitable for your concrete scenario, but Google Cloud Endpoints are definitely a simple way of transferring data programmatically from Google App Engine.
These are essentially Google's implementation of REST web services, so they allow you to share resources using URLs. This is still an experimental technology, but for as long as I've worked with them, they have worked perfectly. Moreover, they are very well integrated with GAE and the Google Plugin for Eclipse.
You can automatically generate an endpoint from a JDO persistent class, and I think you can automatically generate the client libraries as well (although I haven't tried that).
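As an illustration, a hand-written endpoint is roughly this (the Note class and the NoteStore helper are hypothetical; what the generator produces from a JDO class looks similar):

import java.util.List;
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;

// Hypothetical entity exposed over a JSON web service.
@Api(name = "noteendpoint", version = "v1")
public class NoteEndpoint {
    @ApiMethod(name = "notes.list", httpMethod = ApiMethod.HttpMethod.GET)
    public List<Note> listNotes() {
        // Query the datastore (e.g. via JDO) here; the Endpoints
        // framework serializes the returned objects to JSON.
        return NoteStore.loadAll(); // hypothetical helper
    }
}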
I have two projects. In the first project I set a session attribute:
req.getSession().setAttribute("x", name);
return "ses";
In the second project I read it:
model.addAttribute("ses", req.getSession().getAttribute("x"));
return "oses";
but the session attribute does not appear.
How can I make a session visible across different projects with the Spring framework?
You can't. (Well, perhaps you can set up some sort of session replication, but you shouldn't do it. See the related question.)
You should use other forms of communication between your applications. The flow will be more complicated and will include an exchange of tokens through (simple) web services, but it is better than relying on the server container and on the fact that both applications run in the same container.
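A very rough in-memory sketch of the token idea (a real setup would need a shared store, expiry, and HTTPS around the web service):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Lives in the first application; the second application redeems
// tokens through a simple web service you expose around this class.
public class TokenService {
    private static final ConcurrentMap<String, String> TOKENS = new ConcurrentHashMap<>();

    // Called when redirecting the user to the second application.
    public static String issue(String username) {
        String token = UUID.randomUUID().toString();
        TOKENS.put(token, username);
        return token;
    }

    // Each token can be redeemed exactly once; returns null if unknown.
    public static String redeem(String token) {
        return TOKENS.remove(token);
    }
}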
It'd be helpful to describe what you're actually trying to accomplish; as Bozho says you can't really share session objects between apps.
You could, however, use JMS (or any other inter-app communication mechanism) to send data from one app to another. You'll still need the capability to decide what to do with that data once you have it in the receiving app: how do I associate it with a given user, how do I get it into that user's session, and so on.
User information can be passed in the message, but there has to be some commonality between the two systems, some agreed-upon key, that can be used to figure out who the info belongs to.
Once you have that, the rest is mechanics; there are interesting games to be played, and it's easy to mess it up :)
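A minimal JMS sketch of this (the "userKey" message property stands in for the agreed-upon key; the connection factory and queue would normally be looked up via JNDI):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class UserDataSender {
    public static void send(ConnectionFactory factory, Queue queue,
                            String userKey, String payload) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            // The agreed-upon key both systems understand.
            message.setStringProperty("userKey", userKey);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}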
Let me explain the complete situation I am currently stuck in.
We are developing a very complex application in GWT and Hibernate, and we are trying to host the client and server code on different servers because of the client's requirements. I have been able to achieve this using JNDI.
Here comes the tricky part: the client also needs the application on other platforms, say an iPhone or .NET version of our application, with the same database and the same methods. We don't want to write the server code again because it's going to be the same for all of them.
I have tried putting a web-services wrapper on top of my server code, but because of the complexity of the architecture and the class dependencies I have not been able to do so. For example, let's consider the code below.
class Document {
    List<User> users;
    List<AccessLevel> accessLevels;
}
The Document class has a list of users, a list of access levels, and many more lists of other classes, and those other classes have more lists of their own. Some important server methods take a class (Document or any other) as input and return some other class as output. And we shouldn't use a complex architecture in web services.
So I need to stick with JNDI. Now, I don't know how I can make a JNDI call from another application.
Please suggest ways to overcome this situation. I am open to technology changes, meaning JNDI, web services, or any other technology that serves me well.
Thanking You,
Regards,
I have never seen JNDI used as a mechanism for request/response inter-process communication. I don't believe that this will be a productive line of attack.
You believe that Web Services are inappropriate when the payloads are complex. I disagree; I have seen many successful projects using quite large payloads with many nested classes. Trivial example: Customers with Orders with Order Lines with Products with ... and so on.
It is clearly desirable to keep payload sizes small; there are serialization and network costs, and big objects will be more expensive. But it is far preferable to have one big request than lots of little ones. A "busy" interface will not perform well across a network.
I suspect that the one problem you may have is that certain of the server-side classes are not pure data; they refer to classes that only make sense on the server, and you don't want those classes in your client.
In this case you need to build an "adapter" layer. This is dull work, but no matter what inter-process communication technique you use, you will need to do it. You need what I refer to as Data Transfer Objects (DTOs): payloads that are understood by the client, using only classes reasonable for the client, and which the server can consume and create.
Let's suppose that you use technology XXX (JNDI, Web Service, direct socket call, JMS):
Client --- sends Document DTO ---XXX---> Adapter transforms DTO into the server's Document
and similarly in reverse. My claim is that no matter which XXX is chosen, you have the same problem: you need the client to work with "cut-down" objects that reveal none of the server's implementation details.
The adapter has responsibility for creating and understanding DTOs.
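A sketch of what this might look like for the Document example above (the accessors on the server-side classes are hypothetical):

import java.util.ArrayList;
import java.util.List;

// Flat payload: only types the client can reasonably understand.
public class DocumentDto {
    public String id;
    public List<String> userNames = new ArrayList<>();
    public List<String> accessLevels = new ArrayList<>();
}

// Server-owned adapter between the rich Document and the flat DTO.
class DocumentAdapter {
    DocumentDto toDto(Document doc) {
        DocumentDto dto = new DocumentDto();
        dto.id = doc.getId(); // hypothetical accessors throughout
        for (User u : doc.getUsers()) {
            dto.userNames.add(u.getName());
        }
        for (AccessLevel a : doc.getAccessLevels()) {
            dto.accessLevels.add(a.getName());
        }
        return dto;
    }
}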
I find that working with RESTful web services using JAX-RS is very easy; once you have a set of DTOs, creating the web services is the work of minutes.
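For example, exposing the DTO above with JAX-RS is little more than this (the path and the DocumentStore lookup are hypothetical):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/documents")
public class DocumentResource {
    private final DocumentAdapter adapter = new DocumentAdapter();

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public DocumentDto get(@PathParam("id") String id) {
        return adapter.toDto(DocumentStore.load(id)); // hypothetical lookup
    }
}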