I have a list of services with time windows. Is it possible to configure JSprit in such a way that some time windows must be met (hard time windows) while others are configured as soft time windows?
Thanks in advance for your help
Hard time windows can be implemented by assigning a time window to a service via Service.Builder. By default, services do not have any time windows. Soft time windows can be considered by implementing core.problem.constraint.SoftActivityConstraint:
https://github.com/jsprit/jsprit/blob/master/jsprit-core/src/main/java/jsprit/core/problem/constraint/SoftActivityConstraint.java
Here, you can penalize "late" arrivals. Keep in mind that the insertion of a new activity has not only a local impact on the two neighboring activities; it can affect the whole route, since it shifts all subsequent activities. This, in turn, can yield additional penalties that need to be considered as well. To evaluate this in constant time, you need some sort of approximation of the global impact. Once you add the soft constraint, you need to account for it in your objective function as well (see the examples and their respective code to see how this works).
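To make the cost term concrete, here is a minimal, self-contained sketch of the kind of lateness penalty a SoftActivityConstraint's getCosts(...) implementation could return. The linear model and the penalty rate are assumptions of this example, not anything jsprit prescribes:

```java
// Hypothetical helper: zero cost inside the soft window, a linear penalty for
// arriving after its end. A SoftActivityConstraint implementation would return
// such a value from getCosts(...), and the same term must then be reflected in
// the objective function.
public class SoftTimeWindowPenalty {
    private final double penaltyPerTimeUnit;

    public SoftTimeWindowPenalty(double penaltyPerTimeUnit) {
        this.penaltyPerTimeUnit = penaltyPerTimeUnit;
    }

    public double costOfArrival(double arrivalTime, double softEnd) {
        // No penalty for punctual arrivals, linear growth afterwards.
        return Math.max(0.0, arrivalTime - softEnd) * penaltyPerTimeUnit;
    }

    public static void main(String[] args) {
        SoftTimeWindowPenalty p = new SoftTimeWindowPenalty(2.0);
        System.out.println(p.costOfArrival(10.0, 8.0)); // 2 time units late at rate 2.0
        System.out.println(p.costOfArrival(5.0, 8.0));  // on time, no penalty
    }
}
```

A real implementation would also have to approximate the penalties caused by shifting the downstream activities, as described above.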
I am trying to find out whether my approach to using secrets in my application is bad practice or a potential security issue.
Our deployment pipeline fetches the secrets from Vault and provides them to the application as an environment variable containing a JSON string. When I have to access a secret in my application,
I let @Value convert the JSON string into a Map of key/value pairs and use .get() to fetch the desired value. Please let me know if this can be an issue, or if there is a better way to do it.
SECRETS - provided as an environment variable
Ex: export SECRETS='{"hello":"world","foo":"bar"}'
@Bean
public Object method(@Value("#{${SECRETS}}") Map<String, String> secrets) {
    return new Object(secrets.get("hello")); // passing the secret as a constructor argument
}
This isn't how security works. You can't point at measures and code styles and claim that "this one is 94.1835% secure" or some such nonsense.
Instead, you first create threat models: which attack vectors are you trying to protect against?
Note that there are attack vectors that are incredibly hard to protect against (read: cost millions and impose a lot of inconvenience on legitimate users). That is, "security at any cost" is a stupid thing to say and not something you can put into practice.
That is why you want those attack vectors spelled out: as a programmer and a technically capable operator, you can explain precisely which attack vector(s) a given application is capable of defending against and which ones it isn't, as well as which measure(s) will stop a given attack vector and how much they cost.
It's up to a product decision maker to take the list of attack vectors and cost(s) to mitigate them, and decide which ones to mitigate and which ones to leave. There will always be some you leave out.
For storing sensitive authentication data like this, the attack vectors fall into three broad categories:
Outside parties that do not have physical access to the machine the code runs on, and cannot log in on that machine as either the 'root' user or the user under which the server runs
The only way they are going to read that map is if you write code that somehow leaks it. For example, you have some web endpoint programmed so that if I go to, say, https://www.user16353001.com/getAuthInfo?k=hello, it gives world back; i.e. you programmed it that way.
You solve that by.. not doing that.
Parties that have physical access to the machine, or can log in as root, or can log in as the user under which the process runs
Then you're in deep trouble and in general nothing is safe anyway - if an attacker can do that, you have far bigger problems than 'oh dear they can read the auth map'. However, there is a vanishingly exotic situation where you may still care about this.
If the attacker can access the box pretty much whenever they want and for long periods of time, no tricks in the code are ever going to solve the problem. So we pile yet another exotic condition on top of the exotic conditions we already have: let's say we want to protect against an attacker that has physical / root / user access, but only intermittently.
And then we pile a third condition on top and that is: You, the server code, only need the auth info sometime during the bootup procedure and then you don't need it any more.
In this one case there is a thing you can do: store the data as a char[], and then explicitly zero out the array when you're done with it. In addition, do not ever let the password be transformed into any other data type. Never turn it into a String, ByteBuffer, or anything else.
This is in fact why e.g. Console's readPassword() method returns a char[] and not a String. For this one exotic-wrapped-in-an-exotic-wrapped-in-an-exotic case. I think it's a ridiculous measure no sane person who judges attack vectors could possibly identify as reasonable to spend even one smidge of time on, but then that's why I said: You just explain the technical parts and code it. Someone else can make the call, even a bad one.
The reason why char[] works and nothing else will, and only if you never convert that char[] to anything else, is that e.g. a String is an object that lives on the heap and will continue to do so until the garbage collector gets rid of it, and that might be days from now. Java's garbage collectors do not run immediately once some object turns into garbage. They only run when necessary, which can be 'never' if you have a simple app and tons of memory. In addition, you can't tell Java: please garbage collect this object, right now. Even System.gc() doesn't guarantee anything. So, once it's a String, there's a chance that the password stays in your process memory for days and there's nothing you can do about it. If it's in char[] form, you can zero out that array and have a very slightly stronger guarantee that the password is no longer in your process memory. Which is worth almost nothing, so, mostly, this doesn't matter.
Just calling out that it is the one thing you can do, useless as it is, and why some auth stuff takes/provides char[] instead of String.
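As a concrete illustration of the zero-out trick, here is a self-contained sketch; in real code the char[] would come from something like Console.readPassword() rather than a literal:

```java
import java.util.Arrays;

public class PasswordScrub {
    public static void main(String[] args) {
        // Illustrative only; a real application would obtain this via Console.readPassword().
        char[] password = {'s', 'e', 'c', 'r', 'e', 't'};

        // ... hand the char[] directly to an auth API that accepts char[] ...

        // Explicitly overwrite the array so the secret no longer sits in process memory.
        Arrays.fill(password, '\0');

        boolean allZero = true;
        for (char c : password) allZero &= (c == '\0');
        System.out.println(allZero); // prints "true"
    }
}
```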
TPM / T2 systems
There's one final thing you can do to truly lock it down and protect even against long-lasting root access: security chips.
On Windows, there's the TPM chip that most modern Windows-targeted hardware has. On Macs there's the T2, which I think is baked into the M-series family of Apple's own chips these days.
These systems are extremely hard to integrate from Java code (you'd need some native bridges), basically nobody does this, and the auth protocol has to be a challenge/response affair (i.e. a simple password cannot be protected using TPM infrastructure), which further complicates matters.
If you can jump through all those hoops, you can set up a system whereby the secret is loaded into the TPM/T2 chip, by the TPM/T2 chip, and even a few million dollars of hardware cannot get it out, by anybody.
You need to look really, really far to find implementations of this principle. It will also be extremely complicated to try to do this with Java, and most existing auth protocols simply cannot be adapted to use these. So, you can probably forget about their existence.
I was exploring the CP-SAT APIs for fetching all solutions for a given set of constraints.
As per the API documentation, the onSolutionCallback() function is called for every solution found. However, if I need to find all solutions for a given model, is there a way to detect the last solution, or that no more solutions exist, through the onSolutionCallback() function or by other means?
I found that the searchAllSolutions() API can be used, and that termination conditions can be set based on time or the number of solutions. Assuming I can wait an unlimited amount of time, how do I detect that no more solutions are feasible?
https://developers.google.com/optimization/cp/cp_tasks
Another related question:
Is there any remote chance for a CP-SAT solver to run into a non-deterministic state or into infinite loops (or the like) even when a feasible solution exists for the given set of constraints?
I plan to use CP-SAT for a production use case and would therefore like to know about its determinism and the upper bounds on its execution time.
Edit: Added the second question.
My application has a number of objects in an internal list, and I need to be able to log them (e.g. once a second) and later recreate the state of the list at any time by querying the log file.
The current implementation logs the entire list every second, which is great for retrieval because I can simply load the log file, scan through it until I reach the desired time, and load the stored list.
However, the majority of my objects (~90%) rarely change, so it is wasteful in terms of disk space to continually log them at a set interval.
I am considering switching to a "delta" based log where only the changed objects are logged every second. Unfortunately this means it becomes hard to find the true state of the list at any one recorded time, without "playing back" the entire file to catch those objects that had not changed for a while before the desired recall time.
An alternative could be to store (every second) both the changed objects and the last-changed time for each unchanged object, so that a log reader would know where to look for them. I'm worried I'm reinventing the wheel here though — this must be a problem that has been encountered before.
Existing comparable techniques, I suppose, are those used in version control systems, but I'd like a native object-aware Java solution if possible — running git commit on a binary file once a second seems like it's abusing the intention of a VCS!
So, is there a standard way of solving this problem that I should be aware of? If not, any pitfalls that I might encounter when developing my own solution?
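The standard pattern here is "snapshot plus delta", the same keyframe scheme video codecs and database write-ahead logs use: write a full snapshot every N ticks, only deltas in between, and to recover the state at time t, load the latest snapshot at or before t and replay the few deltas after it. A self-contained sketch (class and field names are illustrative, not from any particular library):

```java
import java.util.*;

public class SnapshotDeltaLog {
    private final int snapshotInterval;
    // In a real system these would be written to disk; maps keep the sketch self-contained.
    private final NavigableMap<Long, Map<String, String>> snapshots = new TreeMap<>();
    private final NavigableMap<Long, Map<String, String>> deltas = new TreeMap<>();
    private final Map<String, String> current = new HashMap<>();

    public SnapshotDeltaLog(int snapshotInterval) {
        this.snapshotInterval = snapshotInterval;
    }

    public void record(long time, Map<String, String> changes) {
        current.putAll(changes);
        if (time % snapshotInterval == 0) {
            snapshots.put(time, new HashMap<>(current)); // full state every N ticks
        } else {
            deltas.put(time, new HashMap<>(changes));    // changed objects only
        }
    }

    // Assumes a snapshot exists at or before the requested time (e.g. one at t=0).
    public Map<String, String> stateAt(long time) {
        Map.Entry<Long, Map<String, String>> snap = snapshots.floorEntry(time);
        Map<String, String> state = new HashMap<>(snap.getValue());
        // Replay only the deltas between the snapshot and the requested time.
        deltas.subMap(snap.getKey(), false, time, true).values().forEach(state::putAll);
        return state;
    }

    public static void main(String[] args) {
        SnapshotDeltaLog log = new SnapshotDeltaLog(5);
        log.record(0, Map.of("a", "1", "b", "1")); // snapshot
        log.record(1, Map.of("a", "2"));           // delta: only "a" changed
        log.record(5, Map.of("b", "2"));           // snapshot again
        System.out.println(log.stateAt(1));        // state at t=1: a is "2", b is "1"
    }
}
```

Tuning the snapshot interval trades disk space against worst-case replay length; with ~90% of objects rarely changing, even a generous interval keeps the deltas tiny.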
The problem is how to store (and search) a set of items a user likes and dislikes. Although each user may have 2-100 items in their set, the number of possible item values is in the tens of thousands (and expanding).
Associated with each item is a value say from 10 (like) to 0 (neutral) to -10 (dislike).
So given a user with a particular set, how to find users with similar sets (say a percentage overlap on the intersection)? Ideally the set of matches could be reduced via a filter that includes only items with like/dislike values within a certain percentage.
I don't see how to use a key/value or column store for this, and walking a relational table of items for each user would seem to consume too many resources. Turning the sets into documents would seem to lose clarity.
The web app is in Java. I've searched ORMS, NoSQL, ElasticSearch and related tools and databases. Any suggestions?
OK, it seems like the actual storage isn't the problem; rather, you want to build a suggestion system based on the likes/dislikes.
The point is that you can store things however you want, even in SQL; most SQL RDBMSs will be good enough for your data store, but you can of course also use anything else you like. The point is that no SQL solution (that I know of) will give you good results with this by itself. What you are looking for is a suggestion system based on artificial intelligence, and the best-known one for distributed systems, with many libraries implemented, is Apache Mahout.
According to what I've learned about it so far, it can do what you need basically out of the box. I know that it's based on Hadoop and YARN, but I'm not sure whether you can import data from anywhere you want or need to have it in HDFS.
The other option would be to implement a machine-learning algorithm on your own, which would run on only one machine, but you just won't get the results you want with a simple query in any SQL system.
The reason you need machine-learning algorithms, and a query with some numbers won't be enough in most cases, is the diversity of the users you are facing. What if you have a user B who liked/disliked everything he has in common with user A in the same way, but the coverage is only 15%? On the other hand, you have a user C who is pretty similar to A (while not at 100%, the directions are pretty much the same), and C has rated over 90% of the things that A also rated. In this scenario C is much closer to A than B would be, even though B has 100% agreement. There are many other scenarios where simple percentages won't be enough, and that's why many companies with suggestion systems (Amazon, Netflix, Spotify, ...) use Apache Mahout and similar systems to get those done.
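The B-versus-C scenario can be made concrete by computing two separate scores, one for agreement and one for coverage. This is a self-contained illustration of the idea, not Mahout's API:

```java
import java.util.*;

public class RatingSimilarity {
    // Cosine similarity over the items both users rated (-10..10 scale from the question).
    public static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            Integer rb = b.get(e.getKey());
            if (rb == null) continue;               // only items in the intersection count
            dot += e.getValue() * rb;
            normA += e.getValue() * e.getValue();
            normB += rb * rb;
        }
        if (normA == 0 || normB == 0) return 0;
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Fraction of a's items that b has also rated ("coverage" in the text above).
    public static double coverage(Map<String, Integer> a, Map<String, Integer> b) {
        long shared = a.keySet().stream().filter(b::containsKey).count();
        return a.isEmpty() ? 0 : (double) shared / a.size();
    }

    public static void main(String[] args) {
        Map<String, Integer> a = Map.of("x", 10, "y", -10, "z", 5);
        Map<String, Integer> b = Map.of("x", 10);  // perfect agreement, low coverage
        System.out.println(similarity(a, b) + " / " + coverage(a, b));
    }
}
```

A user like B scores a perfect similarity but a low coverage, which is exactly why a single percentage is not enough and real recommenders combine (and weight) several such signals.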
We have a Java-based product which keeps a Calculation object in the database as a blob. At runtime we keep this object in memory for fast performance. Now there is another process which updates this Calculation object in the database at regular intervals. What would be the best strategy so that, when this object gets updated in the database, the cache evicts the stored object and fetches it again from the database?
I would prefer not to use a caching framework unless it is a must.
I'd appreciate any responses on this.
It is very difficult to give a good answer to your question without any knowledge of your system architecture, design constraints, IT strategy, etc.
Personally, I would use the Messaging pattern to solve this issue. A few advantages of that pattern are as follows:
Your system components (the Calculation process and the update process) can be loosely coupled
Depending on the implementation of the Messaging pattern, you can "connect" many Calculation processes (scaling out) and many update processes (with a master-slave approach).
However, implementing Messaging pattern might be very challenging task and I would recommend taking one of the existing frameworks or products.
I hope that will help at least a bit.
I did some work similar to your scenario before; generally, there are two ways.
One: the cache holder polls the database regularly, fetches the data it needs, and keeps it in memory. The data can be stored in a HashMap or some other collection. This approach is simple and easy to implement, with no extra framework or library needed, but users will have to endure stale data from time to time. Besides, polling puts a lot of pressure on the DB if the number of pollers is large or the query is not fast enough. Still, it is generally not a bad choice if your real-time requirements are not that strict and your system is relatively small.
The other approach is for the cache holder to subscribe to notifications from the data updater and update its data after being notified. It provides a better user experience, but it brings more complexity to your system because you have to get some messaging infrastructure, such as JMS, involved. Developing and tuning it is more time-consuming.
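The polling variant can be made cheap by keeping a version number (or last-modified timestamp) next to the blob and refetching the blob only when the version has moved on. In this sketch the Suppliers stand in for the real JDBC queries, and all names are illustrative:

```java
import java.util.function.Supplier;

public class PollingCache<T> {
    private final Supplier<Long> versionQuery; // e.g. SELECT version FROM calc WHERE id = ?
    private final Supplier<T> blobQuery;       // e.g. SELECT blob    FROM calc WHERE id = ?
    private long cachedVersion = -1;
    private volatile T cached;

    public PollingCache(Supplier<Long> versionQuery, Supplier<T> blobQuery) {
        this.versionQuery = versionQuery;
        this.blobQuery = blobQuery;
    }

    // Called periodically, e.g. by a ScheduledExecutorService.
    public synchronized void poll() {
        long dbVersion = versionQuery.get();
        if (dbVersion != cachedVersion) {      // cheap check; full fetch only on change
            cached = blobQuery.get();
            cachedVersion = dbVersion;
        }
    }

    public T get() { return cached; }

    public static void main(String[] args) {
        long[] dbVersion = {1L};
        String[] dbBlob = {"calc-v1"};
        PollingCache<String> cache = new PollingCache<>(() -> dbVersion[0], () -> dbBlob[0]);
        cache.poll();
        System.out.println(cache.get());       // calc-v1
        dbVersion[0] = 2; dbBlob[0] = "calc-v2";
        cache.poll();                          // version moved, blob is refetched
        System.out.println(cache.get());       // calc-v2
    }
}
```

In a real application, poll() would be scheduled at the chosen interval, and the update process would bump the version column in the same transaction that rewrites the blob.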
I know I am quite late responding to this, but it might help somebody searching for the same issue.
Here was my problem: I was storing requests-per-minute information in a HashMap in a Java filter which gets loaded during the start of the application. The problem was that if somebody updated the DB with new information, the map didn't know about it.
Solution: I added an updateTime variable to my Java filter which stores when the HashMap was last updated, and with every request it checks whether more than 24 hours have passed; if yes, it reloads the HashMap from the database. So every 24 hours it simply refreshes the whole HashMap.
My use case did not require real-time updates, so this fits it well.
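The time-based refresh described above can be sketched as a small wrapper; the Supplier stands in for the real database query, and the names are illustrative:

```java
import java.util.Map;
import java.util.function.Supplier;

public class TtlRefreshingMap<K, V> {
    private final Supplier<Map<K, V>> loader; // e.g. a DAO call loading the whole table
    private final long ttlMillis;
    private Map<K, V> map = Map.of();
    private long lastUpdate = 0;

    public TtlRefreshingMap(Supplier<Map<K, V>> loader, long ttlMillis) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
    }

    public synchronized V get(K key) {
        long now = System.currentTimeMillis();
        if (now - lastUpdate > ttlMillis) {   // e.g. 24 * 60 * 60 * 1000 for a daily refresh
            map = loader.get();               // reload the whole map from the database
            lastUpdate = now;
        }
        return map.get(key);
    }

    public static void main(String[] args) {
        int[] loads = {0};
        TtlRefreshingMap<String, Integer> cache = new TtlRefreshingMap<>(() -> {
            loads[0]++;
            return Map.of("requestsPerMinute", 100);
        }, 24L * 60 * 60 * 1000);
        cache.get("requestsPerMinute"); // first access: map is loaded
        cache.get("requestsPerMinute"); // within the TTL: no reload
        System.out.println(loads[0]);   // prints "1"
    }
}
```

As with the original answer, this only suits use cases where reading data up to one TTL old is acceptable.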