How to define idempotent behaviour in case of stateful operations? [closed] - java

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
On several occasions, people have claimed something was 'idempotent' because it wasn't stateful in-memory, even though its observable effect was recording transactions.
If reading capabilities don't have to be idempotent, then getNextIterator() is a reading capability that isn't idempotent, since it increments the iterator. A banking request for a balance wouldn't be idempotent either, because the request creates an audit log entry. The returned result might be the same for two subsequent calls (if no changes have happened), but the log entries would differ.

Saying that "logs were created means it isn't stateless" is absurd. Is a call to the server that does nothing "stateful" because a tiny amount of power was used and so your power bill for the month will be a tiny bit higher than it would have been if the call had not been made? No.
Statefulness includes all aspects that matter to the transaction (in-memory, persistent storage, calls to other services, etc). "Idempotent" means that the call may be retried without ill side effects.
Your example of ticking a counter may still be considered idempotent if it doesn't change the business effect of the call or its response to the caller.
Changes internal to the call that don't have any real impact on the business process concerned, and that aren't exposed to the caller, are irrelevant to the caller.
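To make the distinction concrete, here is a minimal sketch of a retriable operation made idempotent with a client-supplied request ID. The class and method names (AccountService, withdraw, requestId) are illustrative, not from any real API; internal bookkeeping such as a log line would not break the idempotency of this call.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a withdrawal made idempotent by recording the result per request ID.
// Retrying with the same requestId has no additional business effect.
class AccountService {
    private final Map<String, Long> processed = new ConcurrentHashMap<>();
    private long balance = 100;

    synchronized long withdraw(String requestId, long amount) {
        Long prior = processed.get(requestId);
        if (prior != null) {
            return prior;              // replay: return the recorded result
        }
        balance -= amount;             // the business effect happens exactly once
        processed.put(requestId, balance);
        return balance;
    }

    synchronized long balance() { return balance; }
}
```

A retry with the same request ID returns the same result and leaves the balance untouched, which is the property the answer above is describing.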

Related

How to limit the number of requests that get access to an instance? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have a web application with a Java server, and the Java application receives requests from the web.
The problem is that there is an instance of an object that can handle only one request at a time, and new web requests arrive before the object finishes its job.
How do I create some sort of "connection pool" for these requests? Or is there any other way to limit access to this object instance?
I am using the synchronized declaration on a method, and it is doing a pretty good job, but the problem is that it is not ordered. How can I control the order of the incoming requests to this method?
You can synchronize the method that is shared by all the request threads using the synchronized keyword.
The drawback is that it will affect your performance, so try to keep the synchronized block small.
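Regarding the ordering problem: synchronized makes no fairness guarantee, but a fair ReentrantLock hands the critical section to waiting threads in roughly arrival order. A minimal sketch (class and method names are my own, not from your code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a fair lock serves waiting threads in approximate arrival order,
// unlike plain synchronized, which may pick any waiting thread.
class OrderedResource {
    private final ReentrantLock lock = new ReentrantLock(true); // fair = true
    private int jobsDone = 0;

    void handleRequest(Runnable job) {
        lock.lock();            // the longest-waiting thread acquires first
        try {
            job.run();
            jobsDone++;
        } finally {
            lock.unlock();
        }
    }

    int jobsDone() { return jobsDone; }
}
```

Note that fair locks are noticeably slower than unfair ones, so this is a trade-off, not a free upgrade.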
You could treat the requests as commands, store them on a queue, and process them one by one. As already mentioned, this has consequences for the user: the response time will increase. To mitigate this, the handling of the command might be done asynchronously, meaning the user invokes the command and immediately gets an answer back that the command was received correctly. Once the server is done handling the command, it can call the user back to notify them that the command was (successfully) processed.
Note that this can become a complex solution, and it might be better to check whether you can remove or refactor the singleton instead.
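The command-queue idea above can be sketched with a single-threaded executor, which is effectively a FIFO queue: commands run one at a time, strictly in submission order, and the returned Future can drive the later "done" notification. Names (CommandQueue, submit) are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: a single-threaded executor serializes commands in FIFO order.
class CommandQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    Future<?> submit(Runnable command) {
        return worker.submit(command);   // queued; runs strictly in order
    }

    void shutdownAndWait() {
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The caller gets the Future back immediately (the "command received" acknowledgement) and can check or await it later for the completion callback.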

How to divide large Http Response [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I have a Spring Boot RESTful service API. The consumers are other applications. One of the app's controllers returns up to 1,000,000 strings per request.
What is the best practice of splitting such responses in Spring applications?
Update:
I figured out that the response is only needed for a developer's one-off task and would be executed only once, so it's better to create a script for this operation.
Thanks for the answers.
Here is a good example of using multipart requests in Spring Boot: https://murygin.wordpress.com/2014/10/13/rest-web-service-file-uploads-spring-boot/
However, I would prefer to think about your problem from an architectural point of view. Why should the REST service return such a huge response? And is it really necessary to return all those results? There are a few factors that might help me give a better answer.
This is the kind of situation when there is always a trade off.
1) The basic question is: can't you provide additional parameters (they don't have to be mandatory, they can be optional) to reduce the number of returned results?
2) How frequently does your data change? If it doesn't change very often (say, once a day), then you can introduce a paging mechanism so you return only a segment of the result. On your side, you can introduce a caching mechanism between your business logic layer/database and the REST client.
3) If your data changes frequently (for example, you are providing a list of flight prices), then you can introduce a caching layer per client ID. You can cache the results on your side and send them to the client divided into several requests. Of course, you will have to add a timestamp and expiry date to each cached request, or you will face memory issues.
4) This leads us to another question: where does the pain come from?
Do the application's clients complain about not being able to handle the amount of data they receive? Do they complain about the response time of your service? Or are you having performance issues on your server side?
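The paging idea from point 2 can be sketched in plain Java, independent of any framework; in a Spring controller, page and size would typically arrive as request parameters (Spring Data's Pageable abstraction covers this case). The class name StringPager is illustrative.

```java
import java.util.List;

// Minimal sketch of server-side paging: the client passes a page index and
// a page size instead of fetching all strings at once.
class StringPager {
    private final List<String> all;

    StringPager(List<String> all) { this.all = all; }

    List<String> page(int page, int size) {
        int from = Math.min(page * size, all.size());
        int to = Math.min(from + size, all.size());
        return all.subList(from, to);   // empty when past the last page
    }
}
```

With a page size of a few thousand, the million-string response becomes a few hundred small, cacheable requests.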

Best massive data persistent storage with TTL? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
We are building a system which needs to put tons of data into persistent storage for a fixed amount of time: 30 to 60 days. Since the data is not critical (we can lose some, for example when a virtual machine goes down) and we don't want to pay the price of persisting it on every request (latency is critical for us), we were thinking about either buffering and batching the data or sending it asynchronously.
Data is append-only; we would need to persist 2-3 items per request, and the system processes ~10k rps on multiple hosts scaled horizontally.
We are hesitating between Mongo (3.x?) and Cassandra, but we can go with any other solution. Does anyone here have experience or hints for solving this kind of problem? We are running some PoCs, but we might not find all the problems early enough, and a pivot might be costly.
I can't comment on MongoDB, but I can speak to Cassandra. Cassandra does indeed have a TTL feature with which you can expire data after a certain time. You have to plan for it, though, because TTLs add some overhead during a process Cassandra runs called 'compaction'; see: http://docs.datastax.com/en/cassandra/2.1/cassandra/dml/dml_write_path_c.html
and: http://docs.datastax.com/en/cql/3.1/cql/cql_using/use_expire_c.html
As long as you size for that kind of workload, you should be OK. That being said, Cassandra really excels when you have event-driven data: things like time series, product catalogs, click-stream data, etc.
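For reference, the TTL mechanism looks roughly like this in CQL (table and column names here are illustrative, not a recommended schema):

```sql
-- Illustrative CQL sketch: rows expire 30 days after insert
CREATE TABLE events (
    id      uuid PRIMARY KEY,
    payload text
);

INSERT INTO events (id, payload)
VALUES (uuid(), 'example')
USING TTL 2592000;  -- 30 days, in seconds
```

A table-level default TTL can also be set via the default_time_to_live table property, which avoids repeating it on every insert.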
If you aren't familiar with Patrick McFadin, meet your new best friend: https://www.youtube.com/watch?v=tg6eIht-00M
And of course, the plenty of free tutorials and training here: https://academy.datastax.com/
EDIT: to add one more idea for expiring data 'safely' and with the least overhead. This one is by a sharp guy by the name of Ryan Svihla: https://lostechies.com/ryansvihla/2014/10/20/domain-modeling-around-deletes-or-using-cassandra-as-a-queue-even-when-you-know-better/

What happens if a method is called too many times at the same time [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Let us say that I have a method that writes to a file or database. What if different parts of the application call this method many times at the same time, or in the same interval of time? Are all those method calls maintained in some stack/queue in memory, waiting for previous requests to be served?
Whether you can write to the same file concurrently is platform dependent; Unix, for example, allows concurrent writes to the same file.
You need to look at synchronization techniques and decide how you want to manage the read/write operations.
From the DB perspective, the DB engine handles it properly: whichever call comes first is served first. The next insert then depends on the first one (if you already inserted the same key in the previous operation, it will obviously throw an exception).
Also, if different parts of your application are appending data to the same file at the same time, there could be a design flaw, and you should reconsider the design.
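To be clear, the JVM does not queue such calls for you; without synchronization, concurrent writes to one target can interleave. A synchronized method gives you that queue implicitly, because waiting callers block until the previous write finishes. A minimal sketch (the SafeAppender name is mine; the Writer could be a FileWriter in practice):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.io.Writer;

// Sketch: one synchronized method serializes all appends to a shared target,
// so each line is written atomically relative to other appendLine calls.
class SafeAppender {
    private final Writer out;

    SafeAppender(Writer out) { this.out = out; }

    synchronized void appendLine(String line) {
        try {
            out.write(line);
            out.write('\n');
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Threads calling appendLine concurrently are blocked one behind the other on the object's monitor, which is exactly the in-memory "queue" the question asks about.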

Spring Roo: Trigger Action on Update – Best Practice [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I have played a bit with Spring Roo, now I am asking myself what is the Roo suggested way or best practice way to trigger an action after an object update.
Let me explain it with an example:
Assume I want to implement a web-based bug tracker (I don't actually want to do this, it is only an example). A bug tracker is about issues. Each issue has a state (New, Confirmed, Assigned, In Progress, Resolved), a title, and some other fields.
The user has a web form where they can enter and update all fields (state, title, …). When the state of an issue switches from 'In Progress' to 'Resolved', the system should send an email to all persons interested in the bug (how this list of interested persons is maintained is out of scope for this problem).
The problem that I have is: How to trigger the email sending process when the state is changed (in a Roo application)? Because there are several problems:
How to determine if the issue state is changed?
We need to make sure that the message is sent after the issue is completely updated (for example, it would not work to put the trigger in the setState() method of the Issue, because it is not guaranteed that the other values from the form (title, …) are updated before the state is changed).
The mail must only be sent if the form was valid and the Issue is likely to be saved (I am not facing the problem that the transaction cannot be committed; that would be another problem).
Does anybody have a good, testable (unit tests) and maintainable solution? Maintainable means especially that the code to handle this should not be placed in the controller, because it will be used in several controllers, and someday somebody will implement a new controller and will likely forget to handle this email concern.
You can use the @PostUpdate annotation, a JPA lifecycle callback.
@Entity
public class Issue {

    @PostUpdate
    protected void onPostUpdate() {
        // This method will run after the entity has been updated
        if (this.state == State.RESOLVED) {
            // ... send the notification email ...
        }
    }
}
Here is more information about the available callbacks.
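An alternative that is easier to unit-test, and not specific to Roo or JPA: put the state-change check in a service that every controller calls, so the email concern lives in exactly one place. This is only a sketch under my own naming assumptions (IssueService, Notifier, the state strings); the persistence call is elided.

```java
// Sketch: the notification concern lives in one service, not in controllers.
interface Notifier {
    void issueResolved(String issueId);
}

class IssueService {
    private final Notifier notifier;

    IssueService(Notifier notifier) { this.notifier = notifier; }

    // Called with the fully updated issue, after all form fields are applied,
    // so the "other values updated before the state" problem does not arise.
    void update(String issueId, String oldState, String newState) {
        // ... validate and persist the updated issue here ...
        if ("IN_PROGRESS".equals(oldState) && "RESOLVED".equals(newState)) {
            notifier.issueResolved(issueId);   // only on the exact transition
        }
    }
}
```

In a unit test you inject a fake Notifier and assert it fires only on the In Progress to Resolved transition; new controllers get the behaviour for free by going through the service.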
