I am searching for a way to archive files in a revision-safe way.
I imagine a Java-based REST service to which I can pass a file, which is then stored immutably and made accessible via a URI.
How could I implement something like this? Is a Hadoop Archive a possible building block? Or is this only possible using content-addressed storage?
I think the best solution is to compute a checksum for each file and return the ID of the file together with the checksum as a combined access URL. Then, each time a client requests a file via that URL (including the checksum), the service verifies the checksum again and can thereby guarantee that the returned file has not been modified since it was stored and is identical to the version the client expects. The URL is the guarantee of the immutability of the requested file.
The client can also verify the checksum itself if it does not trust the service.
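A minimal sketch of that idea (assuming Java 17+ for HexFormat; the ChecksumArchive class, the /files URL pattern, and the storage directory are illustrative, not a finished service):

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.UUID;

// Illustrative only: stores a file under a generated ID and builds a URL that embeds its SHA-256 checksum.
public class ChecksumArchive {

    private final Path storageDir;

    public ChecksumArchive(Path storageDir) {
        this.storageDir = storageDir;
    }

    public String store(byte[] content) throws Exception {
        String id = UUID.randomUUID().toString();
        Files.write(storageDir.resolve(id), content);
        // The checksum embedded in the URL is what lets the client (and the service) detect any later modification.
        return "/files/" + id + "/" + sha256Hex(content);
    }

    public byte[] load(String id, String expectedChecksum) throws Exception {
        byte[] content = Files.readAllBytes(storageDir.resolve(id));
        if (!sha256Hex(content).equals(expectedChecksum)) {
            throw new IllegalStateException("Stored file was modified after archiving");
        }
        return content;
    }

    private static String sha256Hex(byte[] content) throws Exception {
        return HexFormat.of().formatHex(MessageDigest.getInstance("SHA-256").digest(content));
    }
}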
I'm trying to create a new microservice with Spring Boot for uploading and downloading multiple files at once.
These files (PDF, XML, ZIP, TIFF, ...), based on some conditions, can be stored in a storage service like S3 or in another kind of storage. This microservice has to implement the logic to work out where these files are, download them temporarily into a local folder and then return them to the client application.
The goal is to hide from the client applications the retrieval logic and the type of storage where the files reside.
Each of my business entities has several files associated with it, so for the upload API I was thinking of using a multipart request to send all the files of the same entity together.
I would like to do the same for the download API: given the ID of an entity, the API has to return all the files associated with it.
I don't know what the best way to achieve this goal is.
I have seen that there is a Multipart Response but I don't know if it is reliable.
Another idea is to download the files into a temporary shared folder and send back to the client application the list of paths where they are.
Another is to always download the files into a local (not shared) folder and send back to the client application the list of URLs it has to use to get them.
What do you think about it? Any other option?
Thanks for your help!
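For the upload side described above, a rough sketch of such a multipart endpoint, assuming Spring Web; the FileStorageService that decides between S3 and the other kind of storage is hypothetical:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import java.util.List;

// Hypothetical controller: accepts all files of one business entity in a single multipart request.
@RestController
@RequestMapping("/entities")
public class EntityFileController {

    private final FileStorageService storageService; // assumed service that hides where each file is stored

    public EntityFileController(FileStorageService storageService) {
        this.storageService = storageService;
    }

    @PostMapping("/{entityId}/files")
    public ResponseEntity<Void> upload(@PathVariable String entityId,
                                       @RequestParam("files") List<MultipartFile> files) {
        files.forEach(file -> storageService.store(entityId, file));
        return ResponseEntity.accepted().build();
    }
}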
If you're looking to hide the fact that you're using S3 as the storage backing, my guess is that you're trying to either a) ensure you have the flexibility to change the storage backend at a later date, and/or b) put your own authentication in front of the upload/download to ensure users have permissions to read/write contents.
In either case, this sounds like a good job for a performant API gateway to ensure you maximize throughput. Instead of writing a custom service, you can write a configuration for something like Traefik that would a) authenticate requests, b) proxy the request to S3 directly, and c) rewrite the host and path to mask the usage of S3 as a storage backend. If you choose to use Traefik, take a look at the Routers section and the ReplacePathRegex middleware.
In my web application I have a link which, when clicked, invokes an external web service to retrieve a download URL for a file.
I need to send back to the client the file behind this URL, instead of the download URL retrieved from the web service. If possible, I would also like to do it without having to download the file to my server beforehand.
I've found this question about a similar task, but it used PHP with the readfile() function.
Is there a similar way to do this in Java 8?
If you don't want to handle the file at all, you should answer the request with a redirect (e.g. HTTP 301 or 302). If you do want to handle the file, you have to read it into a byte buffer and send it to the client yourself, which makes the transfer slower.
Without seeing your implementation so far, this is my best suggestion.
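A sketch of both options in a plain servlet (Java 8, javax.servlet); how the download URL is obtained from the external web service is left as a placeholder:

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

// Illustrative servlet: either redirects the client to the external URL or proxies the bytes through.
public class FileDownloadServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String downloadUrl = lookupDownloadUrl(req); // placeholder for the external web service call

        if (!"true".equals(req.getParameter("proxy"))) {
            // Option 1: redirect; the client fetches the file directly (HTTP 302).
            resp.sendRedirect(downloadUrl);
            return;
        }

        // Option 2: stream the remote file through this server without storing it on disk.
        resp.setContentType("application/octet-stream");
        try (InputStream in = new URL(downloadUrl).openStream();
             OutputStream out = resp.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }

    private String lookupDownloadUrl(HttpServletRequest req) {
        // Assumed for the example: the URL is passed as a request parameter.
        return req.getParameter("url");
    }
}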
Where should I store credentials for my Java application to access third-party services?
The credentials are not specific to individual users of my application; they are for accessing a web service my application consumes. I know enough not to hard-code them into my application, but where and how do I store them? I also assume they will need to be encrypted.
A .jar file is the best way to store all credentials:
Create an interface in which you store your credentials as final Strings.
Package that interface as a .jar file.
Add the .jar file to your build path.
Implement the interface wherever you use the credentials, and access the String constants in which you stored them.
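A sketch of that approach; the interface name and values are illustrative (and note that the credentials still end up as plain text inside the .jar, it only keeps them out of the main source tree):

// Packaged into its own .jar, kept out of the application's source repository.
public interface ServiceCredentials {
    String API_USER = "someUser";     // illustrative value
    String API_PASSWORD = "somePass"; // illustrative value
}

// In the application that consumes the third-party service:
public class ThirdPartyClient implements ServiceCredentials {

    public void connect() {
        // API_USER and API_PASSWORD are inherited constants from the interface.
        login(API_USER, API_PASSWORD);
    }

    private void login(String user, String password) {
        // ... call the external service here ...
    }
}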
Database
.properties file
Configuration class with a constant
Spring has nice functionality with the @Value annotation, which can auto-magically inject a value from a .properties file (under the resources folder) for a given key.
I use that because in my case I have different key values in multiple app instances, a database would add a little more complexity, and I also avoid unnecessary queries to the database.
Security-wise, if an attacker can read files on your server then he can just as easily read your database, so that doesn't play a part here. The value can be stored in any file on the system.
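A minimal sketch of the @Value approach, assuming Spring Boot and a property key named thirdparty.api.key in application.properties (the key name is illustrative):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// src/main/resources/application.properties would contain:
// thirdparty.api.key=someSecretValue
@Component
public class ThirdPartyConfig {

    // Spring injects the value for the given key when the bean is created.
    @Value("${thirdparty.api.key}")
    private String apiKey;

    public String getApiKey() {
        return apiKey;
    }
}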
On the other hand, you can have a configuration class with a constant:
public static final String SECRET_KEY = "someKey";
To build upon @Zildyan's answer, the comments, and references to other answers:
There are a few options for where to store credentials:
Database
Properties file
Constant (hard coded)
File system (away from application)
As for how to store:
It depends on sensitivity: credentials could be stored in plain text (low sensitivity) or should be encrypted (high sensitivity).
It should also be noted that by combining encryption with separating the credentials from the source, you restrict internal access to the credentials.
Some examples:
A password stored in plain text may be added to source control and read by anyone with access to the source control.
An encrypted password with decryption code would be easily available to anyone able to run the code.
A plain text file stored on the server may be accessible to anyone with access to the server.
An encrypted file stored on the file system may only be accessible to sys admins and the decryption method available to devs.
The same goes for storing in a database and who has access to that database.
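As an illustration of the "encrypted file on the file system" example, a sketch that decrypts a credential with AES-GCM; the file path, the environment variable holding the key, and the convention that the file starts with a 12-byte IV are all assumptions made for the example:

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Base64;

// Illustrative only: the ciphertext lives outside the application and source control,
// and the AES key is provided out of band (e.g. set by a sys admin as an environment variable).
public class CredentialReader {

    public static String readCredential() throws Exception {
        byte[] data = Base64.getDecoder()
                .decode(Files.readAllBytes(Paths.get("/etc/myapp/credential.enc"))); // assumed path

        byte[] key = System.getenv("MYAPP_CREDENTIAL_KEY").getBytes(StandardCharsets.UTF_8); // 16- or 32-byte key

        // Assumed layout: first 12 bytes are the GCM IV, the rest is the encrypted credential.
        byte[] iv = Arrays.copyOfRange(data, 0, 12);
        byte[] ciphertext = Arrays.copyOfRange(data, 12, data.length);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }
}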
JNDI
Per Wikipedia:
The Java Naming and Directory Interface (JNDI) is a Java API for a directory service that allows Java software clients to discover and look up data and resources (in the form of Java objects) via a name.
Your enterprise likely has a JNDI-compatible directory service established. You would ask the sysadmin to include an entry for your particular credentials.
If you are self-administering, then your Java EE (now Jakarta EE) server should have a JNDI-compatible directory service built in. Learn to configure it, and add the entry for your particular credentials.
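A minimal lookup sketch, assuming the credential has been registered under a name like java:comp/env/thirdparty/apiPassword (the name is illustrative and depends on how your server is configured):

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class CredentialLookup {

    public static String thirdPartyPassword() throws NamingException {
        // The application server resolves the name against its configured JNDI directory.
        InitialContext context = new InitialContext();
        return (String) context.lookup("java:comp/env/thirdparty/apiPassword");
    }
}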
Does anyone know how to create an HTTP server in Java, set a default folder for the web content, and then serve files from it? I want to use the com.sun.net.httpserver class.
For example, I have a folder named abc next to my Java file. The Java file runs an HTTP server on port 8080. If I open the address http://123.123.123.123:8080/ I want to see the list of files from the folder abc. The folder abc contains some files, e.g. image.jpg, so I want to be able to open the address of an image file in my browser, like http://123.123.123.123:8080/image.jpg. This way I can open all other files from the folder abc (also subfolders, files in subfolders, etc.).
Is it possible to create this HTTP server?
Would it be somehow possible to run PHP files in the folder?
Thank you very much for your answers.
Why not use embedded Jetty? I am pretty sure you can accomplish what you are looking for with it.
If you want to execute PHP from within Jetty, refer to http://docs.codehaus.org/display/JETTY/Jetty+and+PHP
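A rough sketch of serving a static folder with embedded Jetty, assuming Jetty 9.x (the API differs between major versions) and that the abc folder sits in the working directory:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.ResourceHandler;

// Illustrative: serves the "abc" folder on port 8080 with directory listings enabled.
public class StaticFileServer {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        ResourceHandler resourceHandler = new ResourceHandler();
        resourceHandler.setResourceBase("abc");      // folder next to the application
        resourceHandler.setDirectoriesListed(true);  // show a file listing for "/"

        server.setHandler(resourceHandler);
        server.start();
        server.join();
    }
}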
Once you have created your server object, you need to register some handlers for the path you want the user to use to fetch documents.
HttpServer server = HttpServer.create(new InetSocketAddress("localhost", 8080), 0); // 0 = use the default connection backlog
HttpHandler myDocsHandler = new MyDocsHandler();
server.createContext("/abc", myDocsHandler);
server.start();
There are no built in default handlers, so you will need to write the MyDocsHandler class that implements the HttpHandler interface to handle any requests coming into your server at http://localhost:8080/abc.
The handler requires a single handle method that takes an HttpExchange argument, which gives access to the request data and the response stream. It is your responsibility at this point to do what needs doing. So if you wanted the actual files to be located on your hard drive at /usr/local/abc, your handler would need to open the requested file using standard file I/O and stream it back to the user.
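A sketch of what such a handler could look like, assuming the files live under /usr/local/abc as in the example above (the class and field names are illustrative):

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative handler: maps request paths under /abc to files beneath a base directory.
public class MyDocsHandler implements HttpHandler {

    private final Path baseDir = Paths.get("/usr/local/abc"); // assumed document root

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        // Strip the context prefix ("/abc") to get the path relative to the document root.
        String relative = exchange.getRequestURI().getPath().replaceFirst("^/abc/?", "");
        Path file = baseDir.resolve(relative).normalize();

        // Reject path traversal attempts and missing files.
        if (!file.startsWith(baseDir) || !Files.isRegularFile(file)) {
            exchange.sendResponseHeaders(404, -1); // -1 = no response body
            return;
        }

        byte[] body = Files.readAllBytes(file);
        String contentType = Files.probeContentType(file);
        exchange.getResponseHeaders().set("Content-Type",
                contentType != null ? contentType : "application/octet-stream");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }
}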
A web application uploads files (images only) from the client to a location on the server (there is no database) and also saves the same file(s) from the server back to the client's machine.
Process:
1. Upload a file via <input type="file" />.
2. Save the files into a predefined location on the server (Java).
3. Download the same files from the server to the client's machine by clicking a save button.
Problem: suppose there are two users uploading different files with the same name at the same time into the predefined (or programmed) server folder.
How should I avoid this kind of naming conflict, and how do I keep track of which file belongs to which client?
Possible solution: while uploading a file from the client to the server, create one folder for each client and save the file into that newly created folder.
Please note that there is no database in the application. Please suggest anything better.
Environment: Java servlets, Apache Tomcat 6.0, XHTML.
Use the HttpServletRequest.getSession() method to get the client's unique session, and then HttpSession.getId() to get the session's identifier, which you can use when constructing the directory or file name.
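A sketch of that idea inside an upload flow, assuming Tomcat's javax.servlet API; the upload root path and class name are illustrative:

import javax.servlet.http.HttpServletRequest;
import java.io.File;

// Illustrative helper: builds a per-session folder so that two clients uploading files
// with the same name at the same time never collide.
public class UploadDirectories {

    private static final File UPLOAD_ROOT = new File("/var/webapp/uploads"); // assumed base folder

    public static File directoryFor(HttpServletRequest request) {
        String sessionId = request.getSession().getId();
        File sessionDir = new File(UPLOAD_ROOT, sessionId);
        if (!sessionDir.exists()) {
            sessionDir.mkdirs(); // one folder per client session
        }
        return sessionDir;
    }
}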
Create one folder for each client/user.
That seems like the obvious solution to me. Using the session ID will result in many more directories being created. If the server-saved images are to be used later (which I assume they are, otherwise what's the point of saving them), a directory structure based on usernames (or similar) would be much less painful to navigate than anything else.