Object building across microservices [closed] - java

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
We have three microservices: MSA, MSB and MSC. MSA creates a partial object O1 and sends it to MSB through a dedicated message topic. After receiving the partial object O1 from MSA, MSB populates a few more attributes in O1 and publishes it on the common message bus, from which MSC consumes the object O1.
The question is: is it a good approach for object building to be shared across multiple microservices?

In object-oriented programming, a God object is an object that knows too much or does too much. The God object is an example of an anti-pattern.
A common programming technique is to separate a large problem into several smaller problems (a divide and conquer strategy) and create solutions for each of them. Once the smaller problems are solved, the big problem as a whole has been solved. Therefore a given object for a small problem need only know about itself. Likewise, there is only one set of problems an object needs to solve: its own problems.

So you have a microservice Ordering and a microservice Pricing. Both of the microservices need information about the Product entity.
You should ask yourself:
Do those two different worlds realize the Product entity in the same way? Do both of them need the same information?
Will the product information change for the same reasons for both of the microservices?
If no (which is likely the case), you have to add an abstract layer between them, so that you are sure that they use the same language.
If yes, you can keep on sharing the same object.
By the way, these concerns of yours are not new.
Here is Martin Fowler's article about bounded contexts:
So instead DDD divides up a large system into Bounded Contexts, each of which can have a unified model - essentially a way of structuring MultipleCanonicalModels.
keywords for further research: DDD, context map, bounded context, anticorruption layer.
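The abstraction layer suggested above can be sketched in a few lines. This is a hypothetical illustration, not code from the question: the class names (OrderingProduct, PricingProduct, ProductTranslator) and the attributes each context cares about are invented for the example.

```java
import java.math.BigDecimal;

// Each bounded context keeps its own model of "Product".
class OrderingProduct {           // what the Ordering context cares about
    final String sku;
    final int quantityInStock;
    OrderingProduct(String sku, int quantityInStock) {
        this.sku = sku;
        this.quantityInStock = quantityInStock;
    }
}

class PricingProduct {            // what the Pricing context cares about
    final String sku;
    final BigDecimal listPrice;
    PricingProduct(String sku, BigDecimal listPrice) {
        this.sku = sku;
        this.listPrice = listPrice;
    }
}

// The anticorruption layer: translates one context's model into the other's,
// so neither context depends on the other's internal representation.
class ProductTranslator {
    static PricingProduct toPricing(OrderingProduct p, BigDecimal price) {
        return new PricingProduct(p.sku, price);
    }
}
```

The point of the translator is that a change inside Ordering's model forces a change only in the translator, not in Pricing.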

Related

Microservice design : one single call or two separate APIs? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I have a monolithic app with the following logic:
Get a list A (Customer) from the database
Validate the data in A against some criteria; if validation fails, throw an error
Do some operations on A to get a list B (e.g. regional customers)
Do something with B
Now I am transforming my app to use microservices, but I have trouble designing the calls.
Since B can be deduced entirely from A, I could make a single microservice call, getCustomerA, that returns the whole dataset A. That means only a single database access is needed, which is a performance plus.
But the problem is that the operations on A that derive list B are also part of the business code. So, following domain-driven design, it is more logical to put that code on the Customer microservice side, e.g. in a getRegionalCustomer operation.
So I want to know: what is the best practice in this case? Should we prioritize the single database call (first case), or is it better to make two calls (which in this case means two database calls)?
Since this is mainly opinion based I can only give you that :-)
From my experience splitting the app into microservices just for the sake of doing it puts technical dogma over technical simplicity and often introduces a lot of unnecessary overhead.
With regard to the database calls I can also tell you from experience that quite often you win performance when doing two simple calls over doing one overly complex one. Especially if you start introducing big joins over many tables or - ouch - subselects in the on clause.
See if the simplest solution works and keeps the code tidy. Constantly improve quality and optimize when the need arises. If you have a piece of logic that warrants being split off into a microservice (e.g. because you want to use a different language or framework, or want to offload some calculations), then go for it.
Domain-driven design does not say that each bounded context can contain only one entity; in fact, a bounded context (or microservice) can contain more than one entity when those entities are clearly related - in other words, when they need to be persisted transactionally.
In your case, given the tight relation between the two entities, the best approach is to build a single microservice that performs both operations.
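The single-service option above can be sketched as follows. This is an illustrative assumption, not the asker's actual code: one Customer service loads list A once and derives list B (regional customers) in memory, so only one database round trip is needed. The names and the "region" criterion are invented.

```java
import java.util.List;
import java.util.stream.Collectors;

record Customer(String id, String region) {}

class CustomerService {
    private final List<Customer> database;   // stands in for the real DB access

    CustomerService(List<Customer> database) {
        this.database = database;
    }

    // single database access: returns the whole dataset A
    List<Customer> getCustomerA() {
        return database;                      // imagine: SELECT * FROM customer
    }

    // business logic: B is derived entirely from A, no second DB call
    List<Customer> getRegionalCustomer(String region) {
        return getCustomerA().stream()
                .filter(c -> region.equals(c.region()))
                .collect(Collectors.toList());
    }
}
```

Both operations live in the same bounded context, so the caller gets the domain-driven API (getRegionalCustomer) without paying for a second database call.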

What is the best pattern to persist objects? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I need to persist objects and I want to keep my data classes as clean as possible. The persisted classes do not feature any business-logic code, but only data with getters/setters.
I'm currently implementing a solution with the Observer pattern. Each time an Observable persisted object is modified, it fires a message to an Observer object that takes care of persistence. This way, the only constraint for the persisted object is to be "Observable". It keeps things clean.
Another solution (maybe better?) would be to implement some DAO pattern, though I'm not very familiar with how it works. Maybe it would look like persistedObject.save(); or persistedObject.readById(id);. But that would mean defining some DAO interface and then implementing the create/read/update/delete methods in each and every persisted class.
There are many, many, many answers to this question, data serialization or persistence is a core problem in software engineering. Options include using databases, memory mapped files, binary and textual formats, and more.
My personal favorite for quickly persisting objects is GSON, however your use case will dictate what works best for you.
You mention wanting design patterns for persisting Java objects, and while such patterns are approximately as numerous as there are libraries, here are a couple general suggestions:
Use immutable objects
Use the transient keyword for any fields that are not necessary to reconstruct an object
Avoid defining sanity checks or otherwise limiting the range of acceptable values in your objects - an instance constructed from a deserialize call may not correctly trigger your checks, allowing possibly invalid objects to be constructed
Use your serializable objects to construct more complex objects if you need more sanity checking, e.g. serialize a StubPerson POJO, and have a Person object that can be constructed from a StubPerson only as long as the stub's values are valid
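The last suggestion can be made concrete with a small sketch. This is a hedged illustration under the answer's own naming: the stub is a plain POJO with no invariants (safe to deserialize from anywhere, e.g. with GSON), while Person enforces the sanity checks at construction time.

```java
// Deserialization target: no validation, every field freely settable.
class StubPerson {
    String name;
    int age;
}

// Domain object: can only be constructed from a valid stub.
class Person {
    final String name;
    final int age;

    Person(StubPerson stub) {
        if (stub.name == null || stub.name.isEmpty())
            throw new IllegalArgumentException("name required");
        if (stub.age < 0)
            throw new IllegalArgumentException("age must be non-negative");
        this.name = stub.name;
        this.age = stub.age;
    }
}
```

The serialization library only ever sees StubPerson, so its checks-bypassing construction path can never produce an invalid Person.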
I don't know if it fits for you, but since you have only bean classes you could use the Java Persistence API (JPA).
The DAO pattern is the best one to manage data access and persistence as it has been designed specifically for that.
Considering your needs you will probably have to couple it with some factory pattern in order to manage the different implementations (persistence adapters).
I don't know your requirements, but if your application can be used by many people at the same time you will have to care about concurrent access and define a policy (transactions, locking, etc.), otherwise people will overwrite each other's data.
Regarding your question I'd suggest JDO (with DataNucleus as the implementation), but the learning curve may be too steep for your actual needs.
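The DAO suggestion above, combined with the asker's worry about re-implementing CRUD in every persisted class, can be addressed with one parameterized interface. This is a minimal sketch with invented names; the in-memory implementation stands in for whatever persistence adapter a factory would supply.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// One generic DAO contract instead of per-class CRUD declarations.
interface Dao<T, ID> {
    void save(ID id, T entity);
    Optional<T> findById(ID id);
    void delete(ID id);
}

// In-memory stand-in; a real adapter would talk to a database, a file, etc.
// A factory could hand out different Dao implementations per storage backend.
class InMemoryDao<T, ID> implements Dao<T, ID> {
    private final Map<ID, T> store = new HashMap<>();
    public void save(ID id, T entity) { store.put(id, entity); }
    public Optional<T> findById(ID id) { return Optional.ofNullable(store.get(id)); }
    public void delete(ID id) { store.remove(id); }
}
```

The data classes stay completely clean: they neither observe nor are observed, and all persistence knowledge lives behind the Dao interface.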

what is difference between persistence and serialization? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have gone through this link but I'm looking for a convincing answer.
Visit http://www.coderanch.com/t/270170/java-programmer-SCJP/certification/Difference-serialization-persistence
Serialization is the process of converting an object to another representation (often binary, though you can serialize to other forms like xml, but the default java serialization mechanism is to a binary form). You can persist that serialized form of the object for reading in (deserialization) to restore that object. Serialization is also used as a mechanism for sending java objects across processes/machines (e.g. with RMI). Serialization is not persistence but persistence is one way it can be used.
Simple answer: serialization is the process of changing the representation of an object to another form (mainly for the purpose of transferring it over a communication mechanism), whilst persistence targets the purpose of persisting (yes, it is the same word) object state to physical storage for later retrieval.
Both topics are strongly related, though. Most persistence layers rely on object serialization and deserialization and not too many provide binary dump and restoring of objects.
Interestingly, most developers see implementing de/serialization as a rather boring task, whilst developing a persistence layer is seen as the more interesting part.
Well, obviously, the second one is more complex and the former one is often just a subtask of it.
Persistence - a mechanism to allow you to keep status between executions of your application.
Perhaps a database, maybe files, sometimes cache, in some cases very weird like in the cloud.
Serialization - a way of representing an object in a serial form that allows it to be stored for later recovery.
Often used to persist objects.
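The relationship the answers describe can be shown in one round trip using Java's built-in serialization. This is an illustrative sketch: the byte array stands in for whatever the persistence layer would actually write to (a file, a BLOB column, a cache entry).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Note implements Serializable {
    private static final long serialVersionUID = 1L;
    final String text;
    Note(String text) { this.text = text; }
}

class RoundTrip {
    // Serialization: object -> bytes. Persistence is one use of those bytes.
    static byte[] serialize(Note n) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(n);
        }
        return bos.toByteArray();
    }

    // Deserialization: bytes -> object, restoring the saved state.
    static Note deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Note) ois.readObject();
        }
    }
}
```

Writing the byte array to disk would make this persistence; sending it over a socket (as RMI does) would make it inter-process communication - the serialization step is identical in both cases.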

Implementation of Autoencoder [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I'm trying to implement an auto-encoder on my own in Java. From the theory, I understood that an auto-encoder is basically a symmetric network.
So, if I choose to have 5 layers in total, do I have to use 9 layers in the training (back propagation) phase, or are 5 layers enough?
I've been reading theory but they are too abstract and full of math formulas, I could not get any implementation details via google.
What's the usual way of doing this?
An auto-encoder, in the training phase, uses back propagation to try to make the output similar to the input, with the goal of minimizing the error. In the diagram I am working from, the training network has 7 layers while the actual network after training has 4. So, while training, can I implement the back-propagation with just 4 layers? If so, how can I do this?
Simple backpropagation won't work with so many layers. Due to the so-called vanishing gradient phenomenon, networks having more than two hidden layers won't learn anything reasonable. In fact, the best results are obtained with one hidden layer. So in the case of an autoencoder you should have an INPUT layer, a HIDDEN layer and an OUTPUT layer. No need for more; the Universal Approximation Theorem clearly shows that this is enough for any problem.
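The three-layer structure recommended above can be sketched as a forward pass. This is a minimal illustration, not a full training loop: the weights here are supplied by hand rather than learned, and the class name is invented.

```java
// input -> hidden (encoder) -> output (decoder), with the output layer the
// same size as the input; training would adjust w1 and w2 so that
// forward(x) is close to x.
class TinyAutoencoder {
    final double[][] w1;   // input  -> hidden  (hidden x input)
    final double[][] w2;   // hidden -> output  (input x hidden)

    TinyAutoencoder(double[][] w1, double[][] w2) {
        this.w1 = w1;
        this.w2 = w2;
    }

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // One fully connected layer with a sigmoid activation.
    static double[] layer(double[][] w, double[] in) {
        double[] out = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double s = 0;
            for (int j = 0; j < in.length; j++) s += w[i][j] * in[j];
            out[i] = sigmoid(s);
        }
        return out;
    }

    // Reconstruction: encode, then decode.
    double[] forward(double[] x) {
        return layer(w2, layer(w1, x));
    }
}
```

Backpropagation over exactly these two weight matrices (minimizing the difference between forward(x) and x) is the whole training phase; no extra mirrored layers are added or removed afterwards.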
From the OOP point of view, it depends on whether you plan to reuse this code with different types of neurons - and by type of neuron I mean something deeper than just a different activation function: different behaviour (stochastic neurons?), different topologies (not fully connected networks). If not, modeling each neuron as a separate object is completely redundant.

Any suggestions for creating/refactoring wicket components that enable and/or view associations between entities? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
My team has been tasked with creating what you can generically call an entity management application. The 3 primary entities being managed are: Merchants, Organizations, and Contacts
Separate pages have been created for the management of each entity. However, many of the functional patterns on these pages are quite similar. The 2 patterns in particular that are repeated everywhere I look are:
Pattern 1: Associating entity of type Y with entity of type X
Pattern 2: Listing entities of type Y that are already associated with entity of type X
Unfortunately these pages were created ad hoc by multiple developers. This has resulted in a hodge-podge of solutions, none of which is readily reusable. So what I want to do is abstract the two patterns I identified above into reusable components, but I am fairly new to Wicket and I'm unsure of the best strategy to use.
My first thought is to encapsulate the patterns in two parameterized component classes that extend panel. But I would like to hear from those with more experience.
Any suggestions?
EDIT:
Forgot to mention, for anyone wondering, that any of the 3 entities can associate in a many-to-many relationship with either of the other 2.
Sounds like a pretty good idea to me. Additionally I'd check if any specific logic (like DAOs, Validators and stuff) could be provided via Dependency Injection (Google Guice comes to mind) so you could just use one panel with different handlers/workers/dataproviders for your different usecases.
It's hard to be more specific since your question is kind of broad and a little bit on the vague side.
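The "one panel with different injected handlers" idea can be sketched without the Wicket and Guice machinery, so the example stays self-contained. Everything here is assumed for illustration: in real code this class would extend Wicket's Panel and the predicate would arrive via dependency injection.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

// Pattern 2 from the question: list entities of type Y that are already
// associated with an entity of type X. The association logic is injected,
// so one component serves Merchants, Organizations and Contacts alike.
class AssociationListPanel<X, Y> {
    private final List<Y> allY;
    private final BiPredicate<X, Y> isAssociated;   // injected "handler"

    AssociationListPanel(List<Y> allY, BiPredicate<X, Y> isAssociated) {
        this.allY = allY;
        this.isAssociated = isAssociated;
    }

    List<Y> associatedWith(X x) {
        List<Y> result = new ArrayList<>();
        for (Y y : allY) {
            if (isAssociated.test(x, y)) result.add(y);
        }
        return result;
    }
}
```

Pattern 1 (creating an association) would follow the same shape, with an injected callback that performs the link instead of testing for it.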
