I am an unwilling JSP/Java noob. I've been asked to hurriedly write up a system for generating secure URLs from one site to another. The actual request string (it must be passed as a GET request) needs to be encrypted or otherwise obfuscated so that the user cannot easily change it to request someone else's document. Because of limitations in the environment, I cannot simply manage the request in a session and really must do it this way.
A sample of what I need:
page1.jsp:
A 7-digit number is generated by our system and needs to be passed to http://otherserver.com/page2.jsp. If the user sees this number, it will be obvious what it represents, and no other number can be used for this purpose.
The number should be encrypted or otherwise obfuscated in page1.jsp code and built into a URL to page2.jsp that can be decrypted / unobfuscated easily.
Thank you for your help!
I wouldn't bother to try to obfuscate it.
Instead, if the two servers can share a common secret, you can use keyed hashing (see javax.crypto.Mac) to generate a keyed hash of the document number, which is then passed to the other server along with the document number itself.
The target server can then easily verify that the keyed hash corresponds to the document number, and easily detect attempts to modify it.
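A minimal sketch of that approach, assuming both servers share a secret string (the class and method names here are just illustrative):

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class DocLinkSigner {

    // Shared secret known to both servers; how it is provisioned is up to you.
    private static final byte[] SECRET = "replace-with-a-long-random-secret".getBytes(StandardCharsets.UTF_8);

    /** Hex-encoded HMAC-SHA256 of the document number. */
    public static String sign(String docNumber) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
        byte[] raw = mac.doFinal(docNumber.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : raw) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    /** Used by the receiving server: recompute and compare. */
    public static boolean verify(String docNumber, String receivedSig) throws Exception {
        return sign(docNumber).equals(receivedSig);
    }
}

page1.jsp would then build a link like page2.jsp?doc=1234567&sig=... using sign(), and page2.jsp would reject the request if verify() fails. In production a constant-time comparison (e.g. MessageDigest.isEqual) is preferable to equals().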
I have a simple web application that allows the user to download a pdf containing some dynamic information.
Then the user should sign the document and re-upload it using my application.
Now, I need to check whether the user has changed the PDF content before signing it.
Is there a way to check this? I've tried checking the byteRange, but it seems that the content of the signed PDF is totally different:
Original file size: 2280
Signed file size: 31485
Byte range: [0, 11433, 29635, 1850]
Thanks in advance.
I assume you sign the PDF with an integrated, embedded signature, not a detached signature file. You don't explicitly say so and locus2k appears to assume otherwise, but for a detached signature your question IMO would not make sense.
Now, I need to check whether the user has changed the PDF content before signing it.
This is very difficult because PDF signing services apply a number of different changes to the original PDF before signing, in particular if it doesn't have a prior signature. E.g. they may
linearize the file (which implies sorting objects in the PDF file in a specific order),
fix minor errors in it,
optimize some structures,
create appearances for form fields that don't have any,
...
Thus, all the differences you find must be checked; they may be part of the signing process rather than a prior manipulation by the user.
Of course you can check specific aspects, e.g.
extract text from the original file and the signed copy and compare,
render the original file and the signed copy and compare (allowing for differences only in a predefined signing area),
...
but there may be seemingly minor changes you overlook this way which can nonetheless considerably change the appearance of the document.
There are some means to make the job easier, e.g. you can sign your original PDF first with an author signature in which you declare which changes you allow to the document. This should make it at least difficult for the user to use unmanipulated standard software to do disallowed changes before signing. Furthermore, this restricts changes by the signing software to incremental updates, preventing complete PDF overhauls.
In your code you then would check for the presence and validity of this author signature by you. If there is no issue in those, you "merely" have to inspect the incremental updates.
Beware, though, even checking whether these incremental updates contain unwanted changes is difficult. On the PDF Insecurity web site a number of attacks are described which until their publication could make a fool out of the validation routines for allowed/disallowed changes of widely used PDF validators, Adobe Acrobat among them.
Thus, your task is definitely non-trivial, even if reduced to incremental update analysis.
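If you do end up analyzing the signed file, a library such as Apache PDFBox (just one possible choice, not something the question prescribes; the calls below are from the 2.x API) at least lets you enumerate the signatures and their byte ranges as a starting point:

import java.io.File;
import java.util.Arrays;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.interactive.digitalsignature.PDSignature;

public static void listSignatures(File signedPdf) throws Exception {
    try (PDDocument doc = PDDocument.load(signedPdf)) {
        for (PDSignature sig : doc.getSignatureDictionaries()) {
            int[] byteRange = sig.getByteRange();   // [offset1, length1, offset2, length2]
            System.out.println("Signer:     " + sig.getName());
            System.out.println("SubFilter:  " + sig.getSubFilter());
            System.out.println("ByteRange:  " + Arrays.toString(byteRange));
            // Bytes beyond byteRange[2] + byteRange[3] belong to later incremental updates.
        }
    }
}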
There is one main mechanism to be able to do this (MKL partly mentions this):
CERTIFICATION SIGNATURES
There are two different kinds of signatures:
(1) certification signature (also called 'document signature' or 'author signature')
(2) approval signature (also called 'user signature')
Basically, you as the document author sign the document with a certification signature. This signature is a bit different from the other ones. (E.g. it has the coordinates 0 0 0 0 and has to be the first signature in the document...) Applying the certification signature has the following advantages:
the author can specify what the user is allowed to do and what not
The user can also verify that the document has not been changed (intentionally or unintentionally) when receiving it
All changes must be done as incremental updates, otherwise the user will break the certification signature.
So if you apply a certification signature and the user adds his changes as an incremental update, you can then check the incremental update to see what was changed. As indicated by MKL, this is however not always trivial and in my opinion depends on your use case:
Is it that you want to know whether a user
(a) changed your dynamic content (filled form fields, added some comment, ...) so that you can extract those changes and process them for further use? Or do you want to know whether a user
(b) did manipulate the PDF, changed some text, added images or the like, so you want to detect fraudulent changes?
Both are possible, but of varying complexity. It is easy to extract changed form data or annotation content. Other changes are a bit more tricky to extract and detect. But this might also depend on the tool you use; some offer more support for this than others...
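As a rough first check along these lines, you can at least detect whether anything was appended after the revision covered by the certification signature, e.g. with Apache PDFBox (just one possible tool, and this assumes the certification signature is the first one returned):

import java.io.File;
import java.util.List;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.interactive.digitalsignature.PDSignature;

/** True if bytes were appended after the revision covered by the first signature. */
public static boolean hasIncrementalUpdate(File signedPdf) throws Exception {
    try (PDDocument doc = PDDocument.load(signedPdf)) {
        List<PDSignature> signatures = doc.getSignatureDictionaries();
        if (signatures.isEmpty()) {
            throw new IllegalStateException("Document carries no signature");
        }
        int[] br = signatures.get(0).getByteRange();       // [offset1, length1, offset2, length2]
        long coveredEnd = (long) br[2] + (long) br[3];     // end of the signed revision
        return signedPdf.length() > coveredEnd;            // anything beyond is an incremental update
    }
}

Whether the appended update only contains allowed changes is then the hard part described above.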
The easiest way is to store the hash value of the file before it is sent to the user, then hash the file again when the user submits it. If the hashes are different, the file was modified.
I'd recommend using Apache's commons-codec for this, with something like:
// Requires Apache commons-codec on the classpath (org.apache.commons.codec.digest.DigestUtils)
public String getSha1Hash(Path file) throws IOException {
    try (InputStream is = Files.newInputStream(file)) {
        return DigestUtils.sha1Hex(is);
    }
}
Then, in the function where you send the PDF to the user, you can do something like:
Path pdf = pdfPath;                      // the path to the PDF file
String outgoingHash = getSha1Hash(pdf);
store(pdf, outgoingHash);                // store the PDF filename and its hash in a DB or some other way
When the user submits the PDF, you would then do:
Path pdf = pdfPath;                      // path to the incoming file; this might be a stream, so adjust for that
String incomingHash = getSha1Hash(pdf);
String originalHash = getFileHash(pdf);  // look up the hash that was stored when the file went out
if (incomingHash.equals(originalHash)) {
    // handle same hash value (file wasn't modified)
} else {
    // handle changed file
}
This is probably quite a simple, newbie question for seasoned Web developers, which I am not, and googling around does not help.
I have a very simple webapp hosted on Heroku, the code of which is here. It has two JSP pages, one index, one with the validation results, nothing fancy. The two JSP pages are here (index.jsp) and here (results.jsp).
The problem is with the validation servlet: it is a POST, and is triggered, when using the app itself, via an input button in index.jsp. But I have tested that it will also work if I call the servlet directly... And I don't want that.
Is there a way to reliably ensure that this servlet may only be called when coming from the index page (and send a 403 otherwise)?
One way I've used is to have the input form on index.jsp include a hidden field containing an MD5 hash that results.jsp can also calculate. I use the MD5 hash of the client machine's IP address concatenated with a shared secret phrase.
I guess for a given client IP address the hash is always going to be the same, so you could also salt it with another value (like the current time) passed in another hidden field for inclusion in the calculation by results.jsp.
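A rough sketch of that idea (the field names and the shared secret are made up for illustration; commons-codec's DigestUtils is just one convenient way to get the MD5 hash):

// In index.jsp (scriptlet or helper): build the token for the hidden fields.
String secret = "shared-secret-phrase";                       // known only to your pages
String salt = String.valueOf(System.currentTimeMillis());
String token = org.apache.commons.codec.digest.DigestUtils.md5Hex(request.getRemoteAddr() + salt + secret);
// render: <input type="hidden" name="salt" ...> and <input type="hidden" name="token" ...>

// In the validation servlet: recompute and compare before doing any work.
String expected = org.apache.commons.codec.digest.DigestUtils.md5Hex(
        request.getRemoteAddr() + request.getParameter("salt") + secret);
if (!expected.equals(request.getParameter("token"))) {
    response.sendError(javax.servlet.http.HttpServletResponse.SC_FORBIDDEN);
    return;
}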
You could generate a fingerprint (for example UUID.randomUUID()) when the first page is loaded and save the value in the current session.
When you post the result to the validation servlet you include that fingerprint as a hidden field and check that the fingerprint exists on the session.
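A minimal sketch of that session-fingerprint approach (attribute and parameter names are illustrative):

// When rendering index.jsp: create the fingerprint and remember it in the session.
String fingerprint = java.util.UUID.randomUUID().toString();
request.getSession(true).setAttribute("formFingerprint", fingerprint);
// render: <input type="hidden" name="fingerprint" value="<%= fingerprint %>">

// In the validation servlet's doPost(): reject requests that don't carry the expected value.
javax.servlet.http.HttpSession session = request.getSession(false);
Object expected = (session == null) ? null : session.getAttribute("formFingerprint");
if (expected == null || !expected.equals(request.getParameter("fingerprint"))) {
    response.sendError(javax.servlet.http.HttpServletResponse.SC_FORBIDDEN);
    return;
}
session.removeAttribute("formFingerprint");   // one-time use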
There is no way you can be 100% sure of that. You can check the Referer header, but it's possible to forge it. You can also set a cookie when loading index.jsp and check its value in the servlet, but it's also possible for someone to load index.jsp to retrieve the cookie and then use it to post to the validation servlet. Same thing with a hidden input carrying a hash.
I intend to use JSON to implement client-server communication. My goal is for a Java server to receive data via HTTP POST from an iPhone app.
I'm concerned about how I can be sure that the data the Java server receives comes only from the iPhone app. Isn't it possible that somebody else discovers the Java server's URL and sends rigged data?
Do I have a chance to recognize that? SSL encrypts the transferred data only, but doesn't solve this problem, I think.
Kind regards
stormsam
You could send a token that is hardcoded into your application. Everything that comes without this valid token should be rejected. Or you can use .htaccess and specify a user and password within your app.
You could use public key encryption, with users having their own keys and you keeping track of who are the legitimate users. This is the most reliable scheme I can think of. That, or giving each user a username and password. However, it's probably a lot more trouble than it's worth, and still doesn't protect against users that have registered with you but are still malicious.
Embedding a token in your application and then sending it with requests, as Cyprian suggests, is probably the easiest scheme and would probably work pretty well, but might be relatively easy to reverse engineer.
A somewhat better solution might be to program into your app a function that transforms any given input into an output; then, your server responds to a request by giving the app a piece of data to transform, and checks the result. A client that passes the test gets a session token which allows it to proceed. This does require an extra round-trip for authentication, though. And it's still not immune to being reverse engineered, since all the information needed to do so is stored in the app that's present on the user's machine.
Assuming you can reasonably protect your iOS app from being disassembled, you could use "signed requests" like the Facebook API (and probably others):
You'll need a shared secret on both client and server (e.g. a random string/byte array). The iOS app then hashes all request parameters plus the shared secret and appends the hash as an additional request parameter, e.g. myserver.com/ws?item=123&cat=456 becomes myserver.com/ws?item=123&cat=456&hash=1ab53c7845f7a. Upon receiving a request, the server recomputes the hash from the regular parameters and the shared secret and compares it to the value of the hash parameter. If both are equal, the request is considered valid (assuming the integrity of your iOS app).
An advantage of this method is that it doesn't require additional round trips to fetch any one-time/CSRF-prevention tokens and does not require encrypting requests and responses (as long as you only care about the integrity of requests, not confidentiality).
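A sketch of how that hash could be computed on both sides (sorting the parameters and using SHA-256 are my choices here, not something the scheme above mandates):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

/** Hex-encoded SHA-256 over the request parameters (in a fixed order) plus the shared secret. */
public static String requestHash(Map<String, String> params, String sharedSecret) throws Exception {
    StringBuilder canonical = new StringBuilder();
    for (Map.Entry<String, String> e : new TreeMap<>(params).entrySet()) {   // sort keys so both sides agree
        canonical.append(e.getKey()).append('=').append(e.getValue()).append('&');
    }
    canonical.append(sharedSecret);
    byte[] digest = MessageDigest.getInstance("SHA-256")
            .digest(canonical.toString().getBytes(StandardCharsets.UTF_8));
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) hex.append(String.format("%02x", b));
    return hex.toString();
}

The client appends the result as the hash parameter; the server recomputes it over every parameter except hash and rejects mismatches. An HMAC (javax.crypto.Mac) over the canonical string would be a somewhat more robust construction than a plain hash of secret-concatenated data.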
You might want to take a look at this. It may give you some directions.
This question already has answers here:
Handling passwords used for auth in source code
(7 answers)
Closed 7 years ago.
I'm writing a Java class that connects to a server and reads messages in a given queue.
I would like to protect the username and password, which, right now, appear as plain text in the source code.
What I'm wondering is: what is a good way to do this? If I encrypt the username and password in a text file, won't I need to store the key, in plain text, in any source code that accesses this file? And then anyone else who decides to use my class will be able to gain access to these fields.
There is no prompt where someone can enter the key, either, as this class will autonomously be used by the system.
EDIT: this will become a Java lib file. But those can easily be decompiled and thus are basically the original class files anyway, right? And the people this is being protected from are fellow developers of other systems who will gain access to this lib file.
My end goal is to have the username and password strings not appear as plain text anywhere, and for them to be as difficult as possible to crack.
It is not possible to do this. Even if you encrypt the login/password and store it somewhere (be it your class or an external file), you'd still need to save the encryption key somewhere in plain text. This is actually just marginally better than saving the username/password in plain text; in fact I would avoid doing so, as it creates a false sense of security.
So I'd suggest that your class takes the username/password as parameters and that the system using your class takes care of protecting the credentials. It could do so by asking an end user to enter them, or by storing them in an external file that is only readable by the operating system user your process runs as.
Edit: You might also think about using mechanisms such as OAuth, which use tokens instead of passwords. Tokens have a limited lifetime and are tied to a certain purpose, so they are a good alternative to access credentials. Your end users could get an access token with their, say, Windows credentials, which is then used inside your class to call the protected service.
This is a classic authentication issue, except that here, Eve can wear Bob's skin like a suit. Is that stretching the metaphor? I'm not sure.
The short answer is that there is no true answer, because what you want is something that basically violates information theory, in that anything transmittable is copyable and thus anything accessible can be viewed as no-longer-unique. Even if you had a magic box, they could just yank out the magic box with some serious JVM hacking.
The long answer is that there are a few solutions that are almost pretty okay, by making it really quite darn hard. I suggest you read the article linked, acquaint yourself with the ideas behind SRP, the vulnerabilities the spec entails, and try to figure out how to get the right to use and implement it. The problem is still there though. It's that you want a system that ensures Bob can never become a flesh-chariot, or fall to the dark side.
Fundamentally, you're breaking the tenth law. I agree with Kork, there's no solution that really does what you want, because you're trying to solve a social problem with a technical feat, one that is quite nearly provably impossible.
There are a few ways of handling this problem. The challenge as you've noted is associating an account with this automated process. So, here are some of the possibilities (from least secure to more secure):
Encrypt the username and password with a calculated key.
The calculated key is based on something both the client and the server can infer (like machine name and IP address)
Associate an authentication token with the client (OAuth style).
The token is negotiated by a one time user interaction to set up the client
The negotiated token is used for all future requests
The negotiated token is only valid for that client on that machine using that user account (server uses socket info to determine the match)
Use multiple forms of authentication
OAuth style token
Calculated token based on time + secondary id (requires clients and servers to be synched to the same time server)
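For the time-based variant in the last item, a minimal sketch (the 30-second window and the idea of HMAC-ing the time slice together with a client id are my illustrative assumptions):

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

/** Token derived from the current 30-second window plus a secondary id. */
static String timeToken(String clientId, byte[] sharedSecret, long nowMillis) throws Exception {
    long window = nowMillis / 30_000L;                 // both sides must be synced to the same time source
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
    byte[] raw = mac.doFinal((clientId + ":" + window).getBytes(StandardCharsets.UTF_8));
    return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
}
// The server accepts a token if it matches the current window (and, to tolerate clock skew, the previous one).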
It is important to note that your security measures should cost more to crack than what they protect is worth. In short, if all the potential bad guy can get is your food preferences of the day, you might not need to be as vigilant as when protecting something higher profile like a bank account. User names and passwords are not the only means of authentication.
It's not clear which code has to know the user name & password. Are these credentials just for the queue being read? If so, only the server code would need to know them. In that case, you could store them in a server file whose permissions allow only the server code to read them. The file permissions would then be enforced by the server operating system, which presumably is much better at security than most programmers will ever be.
I know this question is long since abandoned, but I want to point out that you can of course do this by requiring typed credentials at runtime and only storing a hash of the password. It needs to be a really good hash: use a standard one, don't make up your own. The whole point of a hash is that even if the hashed result is stored in plain text, no one else will be able to come up with a string that yields that hash, even if they know how the hash is computed.
Of course users can try a brute force attack, and since they know the result they want they can run it fast, so you need to use a highly secure password.
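A sketch of that idea using PBKDF2 from the standard library (a standard, deliberately slow hash; the iteration count and the salt/hash storage shown are illustrative):

import java.security.MessageDigest;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

/** Hashes the typed-in password with the stored salt and compares it against the stored hash. */
static boolean passwordMatches(char[] typedPassword, byte[] storedSalt, byte[] storedHash) throws Exception {
    PBEKeySpec spec = new PBEKeySpec(typedPassword, storedSalt, 100_000, 256);
    byte[] candidate = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
            .generateSecret(spec).getEncoded();
    return MessageDigest.isEqual(candidate, storedHash);   // constant-time comparison
}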
This question already has answers here:
How do I securely store encryption keys in java? [closed]
(4 answers)
Closed 7 years ago.
I'm working on a software project where the application will end up being run in an untrusted environment. I have a need to perform some ancillary cryptographic signing (meaning this is not the primary means of securing data), but do not wish to leave the key in plain view as such:
private static final String privateKey = "00AABBCC....0123456789";
What method can I use to reasonably secure this? I'm aware that nothing is foolproof, but this will add an extra layer to the security wall.
For clarification: I've got what is essentially a String that I don't wish to have easily pulled out in a debugger or via reflection. I'm aware that decompilation of the class file could essentially render this moot, but that's an acceptable risk.
Obviously storing the key offsite would be ideal, but I can't guarantee Internet access.
It's impossible to secure a key in an untrusted environment. You can obfuscate your code, you can create a key from arbitrary variables, whatever. Ultimately, assuming that you use the standard javax.crypto library, you have to call Mac.getInstance(), and sometime later you'll call init() on that instance. Someone who wants your key will get it.
However, I think the solution is that you tie the key to the environment, not the program. A signature is meant to say that the data originated from a known source, and has not been tampered with since that source provided it. Currently, you're trying to say "guarantee that my program produced the data." Instead, change your requirement to "guarantee that a particular user of my program produced the data." The onus is then shifted to that user to take care of his/her key.
Forget about obscuring it in the code. It will only make your software harder to read, debug and maintain. You'll also be nailed if your software has to go through a security audit.
If you can't put the key in secure storage (securely on disk, in secure memory, or as a passphrase in someone's head), don't bother with anything else.
If you're in a *nix environment, storing the key on disk with root/root 400 permissions might be "good enough".
On Windows, you could use the DPAPI to store the data in Microsoft's secure memory.
You could also use a lightweight PBE to encrypt the sensitive key and have the user enter the passphrase when the application starts up.
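A sketch of that last option using only javax.crypto (the choice of PBKDF2 and AES/GCM is mine; adapt algorithms and parameters as needed):

import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

/** Decrypts the sensitive key with an AES key derived from a passphrase entered at startup. */
static byte[] decryptSensitiveKey(char[] passphrase, byte[] salt, byte[] iv, byte[] ciphertext) throws Exception {
    byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
            .generateSecret(new PBEKeySpec(passphrase, salt, 100_000, 256)).getEncoded();
    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(derived, "AES"), new GCMParameterSpec(128, iv));
    return cipher.doFinal(ciphertext);   // salt, iv and ciphertext can live on disk; the passphrase never does
}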
Whose private key is that? The private key is supposed to be private, so it is wrong to distribute it.
First off - good on you for thinking about this problem!
Is it possible to instead generate a private key, communicate with your Certificate Authority and have it sign the key (and manage a CRL as well)?
As an alternative, if this is going to be running on Windows, you can use the Crypto API to securely store a private key that is marked as not-exportable. How you distribute that key securely can be another challenge though.
Can you break the private key into two parts: store one in your program, then interactively request the second half when your app starts?
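If you go that route, one simple way to combine the halves at startup is an XOR split, so that neither half alone reveals the key (purely a sketch; how the second half is requested interactively is up to you):

/** Recombines an XOR-split key: one share is stored in the program, the other is requested at startup. */
static byte[] recombineKey(byte[] storedShare, byte[] enteredShare) {
    if (storedShare.length != enteredShare.length) {
        throw new IllegalArgumentException("Shares must have the same length");
    }
    byte[] key = new byte[storedShare.length];
    for (int i = 0; i < key.length; i++) {
        key[i] = (byte) (storedShare[i] ^ enteredShare[i]);
    }
    return key;
}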