Closed. This question needs to be more focused. It is not currently accepting answers. Closed 3 years ago.
I already have a blog application which is built on Spring-Boot as a monolith.
There are 2 entities.
user
post
And the mapping is one to many.
A single user can create multiple blog posts.
How can I recreate the same functionality as separate microservice applications?
So far, from researching on the internet, what I see people suggesting is to create a database per service, etc.
Suppose I create 2 services, say
UserService (which has a separate DB and the CRUD operations associated with it)
PostService (which has a separate DB and the CRUD operations associated with it)
How can I make them communicate with each other?
In the monolith app the POST entity has createdBy mapped to User.
But how does this work in a microservices architecture?
Can anyone please help me design such an architecture?
First list out the reasons why you want to break it into microservices, e.g. to make it more scalable in scenarios like the following.
Posting comments becomes slow, and during this period registration of new users should remain unaffected.
Very few users upload/download files, and you want general users who simply view and post comments to be unaffected, while file upload/download may remain slow.
Answering the above, and analyzing and prioritizing the other NFRs, will help determine how and what to break out.
Additionally,
the Post service only needs to validate whether the user is a valid, logged-in user (correct?); a rough sketch of this is shown below.
The User service does not really need to communicate with the Post service at all.
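A rough sketch of that arrangement, assuming Spring Boot and a plain REST call (entity fields, the user-service URL, and the endpoint are all invented for the example): the Post entity keeps only the creator's id instead of a JPA relation to User, and the Post service validates that id against the User service.

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.stereotype.Service;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

// Post keeps only the id of the creating user; there is no @ManyToOne mapping to User anymore.
@Entity
public class Post {
    @Id
    @GeneratedValue
    private Long id;
    private String title;
    private String content;
    private Long createdByUserId; // replaces the old createdBy -> User relation
    // getters and setters omitted
}

// Thin client the Post service uses to check that a user id refers to a real user.
@Service
class UserClient {
    private final RestTemplate rest = new RestTemplate();

    // Hypothetical endpoint exposed by the user service, e.g. GET /users/{id}
    boolean userExists(Long userId) {
        try {
            rest.getForEntity("http://user-service/users/{id}", Void.class, userId);
            return true;
        } catch (HttpClientErrorException.NotFound notFound) {
            return false;
        }
    }
}
```

Whether the check goes through a REST call, a shared token/JWT validation, or messaging is a separate decision; the important part is that Post stores only the user id, not the User row itself.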
Further, you might want to decouple other minor features as well, which in turn talk to each other and can be authenticated by other means (certificates, etc.) since they are internal, for example services that update some stats (user ranking) or aggregate data.
The system might also have a lot of smaller hidden features, which may or may not have anything to do with the Post service at all. These can be separated into different microservices too (like video/file/picture/any binary content upload/download) and prioritized based on the computation power needed, hit frequency, and business priority.
After breaking it into microservices, you need to run some stress tests (based on the current load) to know which services need replication and automatic load balancing and which do not. Writing the stress tests before breaking things up can also help you understand which features need to be moved out of the monolith first.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 days ago.
I'm building a JavaFX program that allows users to keep track of invoices for services rendered, and I've come across a question that I am struggling to answer on my own.
The program stores data in a MySQL server which is shared among all users of the JavaFX program. As of now (for simplicity's sake) the program gathers the data once from the database upon launch and then allows the user to modify the data on the server. This implementation leads to a lot of problems...
Let's say user1 and user2 log into the JavaFX program at the same time (thus guaranteeing that they have the same data from the database). Suppose that both go on to modify some data and at the end of the day they log off.
The following day user1 and user2 would log back on and find that a lot of data was modified, because the database will only keep the most recent change from either user. That is, if they both happened to work on the same invoice form, then the user who updated the database last would have their data saved, and the other user would be confused as to why the form has changed from their previous input.
This problem made me recall my days in high school where I marvelled at how Google Docs essentially resolved this multi-user issue. I did a bit of light research on how Google Docs keeps track of changes among multiple users, but I couldn't come up with a solution based on differential synchronization (not to mention most of it flew over my head anyway).
I tried coming up with my own solution, perhaps having the MySQL data refresh every 30 seconds or so, but this kind of implementation also has its own set of problems and doesn't quite resolve the issue.
I've seen some other software that doesn't allow multiple users to modify the same invoice form at the same time (that is, two users could create and modify two different invoices at the same time, but both would not be allowed to modify the same invoice at the same time). This implementation could work, but I'm still iffy about implementing it this way and wanted to see if there was another (possibly better/more elegant) approach to this problem, hence the following question.
What are some standard methods/implementations of providing multi-user access to form/data based software?
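For reference, one standard technique for exactly the silent-overwrite problem described above is optimistic locking: each invoice row carries a version column, and an update only succeeds if the version is still the one the user loaded. A minimal JDBC sketch (table and column names are invented for the example):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InvoiceDao {

    // Saves the invoice only if nobody else saved it since this user loaded version `expectedVersion`.
    // Returns false when the row changed underneath us, so the UI can warn the user and reload.
    public boolean updateInvoice(Connection conn, long invoiceId, int expectedVersion,
                                 String newDescription) throws SQLException {
        String sql = "UPDATE invoice SET description = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, newDescription);
            ps.setLong(2, invoiceId);
            ps.setInt(3, expectedVersion);
            return ps.executeUpdate() == 1; // 0 rows updated means a concurrent edit won
        }
    }
}
```

The record-locking behaviour mentioned above (only one user may edit an invoice at a time) is the pessimistic variant of the same idea, typically done with an explicit lock flag or SELECT ... FOR UPDATE.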
Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
I have two microservices (ms), ms1 and ms2. In both of them there is duplicate reusable code (hashing, security related, ORM, etc.). Just FYI, this reusable code can sometimes also maintain some state in the DB (if that matters by any chance).
There are two ways I can proceed:
Either extract the reusable code as a separate library and include it in both ms, or
Create a separate ms for this reusable code and expose it as REST endpoints.
If I take approach 2, the advantage is that I just have to redeploy that ms3 in case of any change. If I take approach 1, I need to redeploy both ms. At the same time, approach 2 will require separate maintenance/resources/monitoring.
Which one is the more ideal approach in terms of system design, considering that hardware resources are not a challenge? I just mentioned two microservices, but in some cases there are more than two ms having duplicate code.
I am not sure what criteria can help me decide whether to go towards a shared library or a microservice.
Update:
I have got some clarity from the blogs below but still have questions. I will think and post a new question if required.
https://blog.staticvoid.co.nz/2017/library_vs_microservice/
https://dzone.com/articles/dilemma-on-utility-module-making-a-jar-or-separate-2
Microservices are only one architectural style. In some cases it is better, in some cases it is worse than other styles. If you don't use microservices, it does not mean that your architecture is not good.
If you still want to have microservices, then neither of these approaches (shared library vs. library as a new "microservice") is good.
I'd suggest considering the following.
The microservice approach does not mean that each endpoint should be encapsulated in a separate microservice. It is normal for one microservice to provide several different endpoints. If this is your case, then put your two services into a single microservice and make them reachable via two different endpoints. Then it is fine that both of them share some classes.
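To make that concrete, here is a minimal sketch (class names, paths and the hashing utility are invented for illustration, assuming Spring Boot) of one microservice exposing two endpoints that simply share a utility class, with no extra library and no extra service:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// The "shared" code lives as an ordinary class inside the same deployable unit.
class HashUtil {
    static String sha256(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return Base64.getEncoder().encodeToString(md.digest(input.getBytes(StandardCharsets.UTF_8)));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

@RestController
public class SharedHashingController {

    // Endpoint for what used to be ms1's functionality
    @GetMapping("/tokens/hash")
    public String hashToken(@RequestParam String token) {
        return HashUtil.sha256(token);
    }

    // Endpoint for what used to be ms2's functionality
    @GetMapping("/documents/fingerprint")
    public String fingerprintDocument(@RequestParam String content) {
        return HashUtil.sha256(content);
    }
}
```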
Microservices should normally have independent persistence layers. If there is a strong dependency on a common persistence layer, check what the reason was to split them into different microservices. Do they really work with different business domains? Can these services be developed and deployed independently of each other? If not, then maybe there is no reason to put them into different microservices, and it could be better to put them into a single microservice. Then it would be fine if they share some classes.
A good microservice should provide functionality for some domain. If you put shared code into a separate microservice, it may turn out that your shared "microservice" does not provide any functionality for a domain but is just a wrapper around utilities. That would not be a microservice.
If you have a strong reason to separate your services into two different microservices, then duplicate the code. Each microservice should be independent of the others. It should be possible to replace the database, and to replace any classes of one microservice, without affecting the other one. One normal way to make them independent is to duplicate the classes that you (currently) consider shared. If the services are really independent, over time this duplicated code will change and will diverge in each microservice. If you have to change this code in both services simultaneously, then it means that your split is not correct and that what you have are not microservices.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
I have been doing research on microservices. I have used Spring Boot, seeing how simple it is to start with. During my research I have read that it is a good approach for every microservice that accesses data from a database to have its own database.
I am curious how this works when starting multiple instances of that same microservice. Can those instances of the same microservice work with just one database, or do they also need separate databases? The dilemma for me is that the data would be different across multiple databases. How does load balancing microservices work in such situations?
Edited after the first 3 comments:
I appreciate the comments. I feel I was lacking in explaining my thoughts behind this question. I am used to building monolithic applications. I made use of Spring and Hibernate (HibernateDaoSupport), and lately also Hibernate Envers. I use Spring's transaction management to handle the commit and rollback situations of the database. This has worked for me so far. I have started looking into microservices and so far I am unable to find a proper explanation of how Spring transaction management, used with Hibernate and Envers, would work as a microservice against a single database. I can understand just one instance of this microservice working, but I am curious whether multiple instances of this microservice would work properly with just one database, especially considering the fact that Hibernate caches database objects for performance reasons, not to mention Envers and its actions.
There is no requirement that a microservice must have a different database; you could share one database across all your microservices or have one per microservice. It depends on you and on architectural decisions that take the different tradeoffs into account.
If you decide on one database per microservice and you have many instances of the same microservice, those instances must share just one database (like with the monolith). About your concerns with Hibernate and its cache: you must handle the cache in a different way, for example using Hazelcast (https://hazelcast.com/use-cases/caching/hibernate-second-level-cache/) or EhCache as the second-level cache.
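A rough sketch of where that plugs in (the exact property names and region factory class depend on your Hibernate version and cache provider, so treat these as assumptions rather than exact settings): the entity is marked cacheable and the second-level cache is switched on in configuration.

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Needs the second-level cache enabled in configuration, roughly:
//   spring.jpa.properties.hibernate.cache.use_second_level_cache=true
//   spring.jpa.properties.hibernate.cache.region.factory_class=<Hazelcast or EhCache/JCache region factory>
// With a distributed provider such as Hazelcast, every instance of the microservice sees the same cache entries.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // getters and setters omitted
}
```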
Anyway, the design patterns are just best practices with different tradeoffs; you must understand the advantages and disadvantages of every pattern before taking a decision.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I have to submit a project at my college for which I have only two months! I already have an idea of what to make but don't know which technology to go for. I want to use something recent so as to make my project more efficient and flexible.
I wanted to make something like an "Attendance Management System" in which we can take attendance of students and save the records in an underlying database, also perform some kind of data mining on the data (to find interesting patterns like the_most_attended_lecture, or to apply some probabilistic model to estimate the_next_possible_bunk, or analysis based on an individual student's record to compute anything interesting...), and then develop an Android app for the UI that can handle requests and responses to the database.
I'm really confused as to what to go for. Currently I have no knowledge of the following, but my friend suggested I choose among them: Node.js (with the Express framework) REST APIs, PHP, JSP, JSON, and MongoDB.
I would really appreciate your help, guys. Thanks.
Let's try to decide the technology stack according to your requirements.
1. Latest technology - You didn't give any justification for this requirement, but since you want it: the current fads for the web server are Node, Go, and nginx (if you happen to choose PHP in the end), and Mongo or Elasticsearch for the data store.
2. Limited time - You have only 2 months to learn the technology, build the prototype, design the DB schema, implement everything, and test. Hence I suggest you go with Node.js or PHP (I am assuming you are familiar with JS and PHP).
3. High database IO - I don't know what scale you will be working at, but the only major thing your server will be doing is DB IO, hence you should choose a non-blocking technology, and among them the most famous is Node.js.
Node.js is something that fulfils every requirement.
If I were you, I would choose Express.js (scaffold a project with the Express generator and you are ready to go) and MySQL (if you are not familiar with any NoSQL store, MySQL seems to fulfil every requirement). And the Android app could be anything like Cordova, as the app does nothing but HTTP requests and some presentation of data.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
Given a stream of URLs where millions could be bit.ly, Google, or TinyURL shortened links, what is the most scalable way to resolve them to get their final URLs?
A multi-threaded crawler doing HEAD requests on each short link while caching the ones you've already resolved? Are there services that already provide this?
Also factor in not getting blocked by the URL shortening services.
Assume the scale is 20 million shortened URLs per day.
Google provides an API. So does bit.ly (and bit.ly asks to be notified of heavy use, and specifies what they mean by light usage). I am not aware of an appropriate API for TinyURL (for decoding), but there may be one.
Then you have to fetch on the order of 230 URLs per second (20,000,000 per day / 86,400 seconds ≈ 231 per second) to keep up with your desired rate. I would measure typical latencies for each service and create one master actor and as many worker actors as you need so the actors can block on lookups. (I'd use Akka for this, not default Scala actors, and make sure each worker actor gets its own thread!)
You should also cache the answers locally; it's much faster to look up a known answer than to ask these services for one. (The master actor should take care of that.)
After that, if you still can't keep up because of, for example, throttling by the sites, you had better either talk to the sites or you'll have to do things that are rather questionable (rent a bunch of inexpensive servers at different sites and farm out the requests to them).
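For illustration, a plain-Java equivalent of the master/worker-plus-cache idea above (using an ExecutorService instead of Akka actors; the pool size and the HEAD-based lookup are assumptions for the sketch): workers block on the network call, and a shared map caches links that have already been resolved.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ShortUrlResolver {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(64); // size from measured latency

    public Future<String> resolve(String shortUrl) {
        String cached = cache.get(shortUrl);
        if (cached != null) {
            return CompletableFuture.completedFuture(cached); // known answer, no network round trip
        }
        return workers.submit(() -> {
            String finalUrl = lookup(shortUrl);
            cache.put(shortUrl, finalUrl);
            return finalUrl;
        });
    }

    // One blocking HEAD request; the Location header of the 301/302 is the expanded URL.
    private String lookup(String shortUrl) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(shortUrl).openConnection();
            conn.setInstanceFollowRedirects(false);
            conn.setRequestMethod("HEAD");
            String location = conn.getHeaderField("Location");
            conn.disconnect();
            return location != null ? location : shortUrl; // no redirect: already a final URL
        } catch (Exception e) {
            throw new RuntimeException("lookup failed for " + shortUrl, e);
        }
    }
}
```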
Using the HEAD method is an interesting idea, but I am afraid it can fail because I am not sure the services you mentioned support HEAD at all. If, for example, the service is implemented as a Java servlet, it may implement doGet() only; in that case doHead() would be unsupported.
I'd suggest you try GET but do not read the whole response; read the HTTP status line only.
Since you have very serious performance requirements you cannot make these requests synchronously, i.e. you cannot use HttpUrlConnection. You should use the NIO package directly. In that case you will be able to send requests to millions of destinations using only one thread and get the responses very quickly.
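As a small sketch in that spirit (using java.net.http.HttpClient, available since Java 11, rather than raw NIO; that substitution is my assumption, not what the answer prescribes): redirects are not followed, the body is discarded, and only the status and the Location header of the shortener's 301/302 response are used.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncExpander {

    // Redirects disabled so the shortener's 301/302 comes back to us instead of being followed.
    private final HttpClient client = HttpClient.newBuilder()
            .followRedirects(HttpClient.Redirect.NEVER)
            .build();

    // Sends a GET but discards the body; only the status line and the Location header are inspected.
    public CompletableFuture<String> expand(String shortUrl) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(shortUrl)).GET().build();
        return client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                .thenApply(response -> response.headers()
                        .firstValue("Location")
                        .orElse(shortUrl)); // not a redirect: already the final URL
    }
}
```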