I have a servlet which uses a file with data. The relative path to this file is contained in web.xml.
I have the following piece of code, which reads data from the file:
public class LoginServlet extends HttpServlet {

    private Map<String, UserData> users;

    public void init() throws ServletException {
        super.init();
        String userFilePath = getServletContext().getInitParameter("user.access.file");
        InputStream userFile = this.getClass().getResourceAsStream(userFilePath);
        try {
            users = readUsersFile(userFile);
        } catch (IOException e) {
            e.printStackTrace();
            throw new ServletException(e);
        }
        ....
        ....
    }

    private Map<String, UserData> readUsersFile(InputStream is) throws IOException {
        BufferedReader fileReader = new BufferedReader(new InputStreamReader(is));
        Map<String, UserData> result = new HashMap<String, UserData>();
        ....
        ....
        ....
        return result;
    }
}
Because this is a servlet and it will not run only on my PC, I can't use an absolute path.
Does anyone know how I can write data to the file in a similar way?
If the resource URL is resolvable to an absolute local disk file system path and that location is writable, then you can use:
URL url = this.getClass().getResource(userFilePath);
File file = new File(url.toURI().getPath());
OutputStream output = new FileOutputStream(file);
// ...
This is, however, not guaranteed to work in all environments.
Your best bet is really to have a fixed, absolute local disk file system path. The normal practice, however, is to store structured data (usernames/passwords) in a database, not in a file.
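For completeness, a minimal sketch of the absolute-path approach, assuming a hypothetical user.access.file.path init parameter in web.xml that holds an absolute path outside the webapp, and placeholder username/passwordHash variables:
String path = getServletContext().getInitParameter("user.access.file.path");
try (Writer writer = new BufferedWriter(new FileWriter(path, true))) {
    // append one record; username and passwordHash are placeholders for illustration
    writer.write(username + ":" + passwordHash + System.lineSeparator());
} catch (IOException e) {
    throw new ServletException(e);
}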
How can I download multiple files as a zip? I am using Spring Boot, and the documents are saved in MongoDB using GridFS.
I was trying to download using FileSystemResource, which takes a File as an argument, taking reference from https://simplesolution.dev/spring-boot-download-multiple-files-as-zip-file/
I tried to download a resource from MongoDB and convert it into a File object using the line of code below:
gridFsTemplate.getResource(GridFsFile).getFile();
But it throws an error saying:
GridFS resource can not be resolved to an absolute file path.
I got it done using ByteArrayResource:
public void downloadZipFile(HttpServletResponse response, List<String> listOfDocIds) {
    response.setContentType("application/zip");
    response.setHeader("Content-Disposition", "attachment; filename=download.zip");
    // This call fetches the docs in the form of byte[] based on the docIds.
    List<FileResponse> listOfFiles = myService.bulkDownload(listOfDocIds);
    try (ZipOutputStream zipOutputStream = new ZipOutputStream(response.getOutputStream())) {
        for (FileResponse fileResponse : listOfFiles) {
            ByteArrayResource byteArrayResource = new ByteArrayResource(fileResponse.getFileAsBytes());
            ZipEntry zipEntry = new ZipEntry(fileResponse.getFileName());
            zipEntry.setSize(byteArrayResource.contentLength());
            zipEntry.setTime(System.currentTimeMillis());
            zipOutputStream.putNextEntry(zipEntry);
            StreamUtils.copy(byteArrayResource.getInputStream(), zipOutputStream);
            zipOutputStream.closeEntry();
        }
        zipOutputStream.finish();
    } catch (IOException e) {
        logger.error(e.getMessage(), e);
    }
}

class FileResponse {
    private String fileName;
    private byte[] fileAsBytes;
    // getters and setters
}
I am able to upload multiple files to an S3 bucket at once. However, there is a mismatch between the file name I provided and the name of the uploaded file. I care about the file name because I need to generate a CloudFront signed URL based on it.
File generation code
final String fileName = System.currentTimeMillis() + pictureData.getFileName();
final File file = new File(fileName); //fileName is -> 1594125913522_image1.png
writeByteArrayToFile(img, file);
AWS file upload code
public void uploadMultipleFiles(final List<File> files) {
    final TransferManager transferManager = TransferManagerBuilder.standard().withS3Client(amazonS3).build();
    try {
        final MultipleFileUpload xfer = transferManager.uploadFileList(bucketName, null, new File("."), files);
        xfer.waitForCompletion();
    } catch (InterruptedException exception) {
        if (LOGGER.isInfoEnabled()) {
            LOGGER.info("InterruptedException occurred => " + exception);
        }
    } catch (AmazonServiceException exception) {
        if (LOGGER.isInfoEnabled()) {
            LOGGER.info("AmazonServiceException occurred => " + exception);
        }
        throw exception;
    }
}
The uploaded file name is 94125913522_image1.png. As you can see, the first two characters disappeared. What am I missing here? I am not able to figure it out. Kindly advise.
private static void writeByteArrayToFile(final byte[] byteArray, final File file) {
    try (OutputStream outputStream = new BufferedOutputStream(Files.newOutputStream(Paths.get(file.getName())))) {
        outputStream.write(byteArray);
    } catch (IOException exception) {
        throw new FileIllegalStateException("Error while writing image to file", exception);
    }
}
The reason for the problem
You lose the first two characters of the file names because of the third argument of this method:
transferManager.uploadFileList(bucketName, null, new File("."), files);
What happens in this case
So, what is that third argument? From the SDK's Javadoc:
/**
 * ...
 * @param directory
 *            The common parent directory of files to upload. The keys
 *            of the files in the list of files are constructed relative to
 *            this directory and the virtualDirectoryKeyPrefix.
 * ...
 */
public MultipleFileUpload uploadFileList(..., File directory, ...) {...}
And here is how it is used inside the SDK:
...
int startingPosition = directory.getAbsolutePath().length();
if (!(directory.getAbsolutePath().endsWith(File.separator)))
startingPosition++;
...
String key = f.getAbsolutePath().substring(startingPosition)...
Thus, the directory argument is used to compute the starting index at which each file's absolute path is trimmed to produce its S3 key.
When you pass new File(".") as the directory, the actual parent directory of your files is {your_path}, but directory.getAbsolutePath() returns {your_path}/. instead.
That is two characters more than you actually need, and those two extra characters shift the trim position, which is why the first two characters of each file name are cut off.
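You can see the off-by-two directly (a quick sketch, assuming a hypothetical working directory of /home/user/app):
System.out.println(new File(".").getAbsolutePath());
// prints: /home/user/app/.   <- note the trailing "/."
// startingPosition becomes "/home/user/app/.".length() + 1,
// two more than "/home/user/app".length() + 1, so every key
// loses its first two characters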
The solution
If you only need to work with the current directory, you can pass the current directory as follows:
MultipleFileUpload upload = transferManager.uploadFileList(bucketName, "",
        new File(System.getProperty("user.dir")), files);
But if you start working with files from other directories, that won't work. Instead, you can use the following code, which creates one MultipleFileUpload per group of files sharing the same parent directory.
private final String PATH_SEPARATOR = File.separator;
private String bucketName;
private TransferManager transferManager;

public void uploadMultipleFiles(String prefix, List<File> filesToUpload) {
    Map<File, List<File>> multipleUploadArguments =
            getMultipleUploadArguments(filesToUpload);
    for (Map.Entry<File, List<File>> multipleUploadArgument :
            multipleUploadArguments.entrySet()) {
        try {
            MultipleFileUpload upload = transferManager.uploadFileList(
                    bucketName, prefix,
                    multipleUploadArgument.getKey(),
                    multipleUploadArgument.getValue()
            );
            upload.waitForCompletion();
        } catch (InterruptedException ex) {
            throw new RuntimeException(ex);
        }
    }
}

private Map<File, List<File>> getMultipleUploadArguments(List<File> filesToUpload) {
    return filesToUpload.stream()
            .collect(Collectors.groupingBy(this::getDirectoryPathForFile));
}

private File getDirectoryPathForFile(File file) {
    String filePath = file.getAbsolutePath();
    String directoryPath = filePath.substring(0, filePath.lastIndexOf(PATH_SEPARATOR));
    return new File(directoryPath);
}
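Hypothetical usage (the file paths are made up): because each group's parent directory is trimmed away, the resulting keys should be prefix/fileName regardless of where the files live on disk:
uploader.uploadMultipleFiles("images", Arrays.asList(
        new File("/tmp/1594125913522_image1.png"),
        new File("/data/exports/photo2.png")));
// expected keys: images/1594125913522_image1.png and images/photo2.png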
I am currently trying to resolve the following vulnerability:
Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
I have searched many posts and all kinds of documentation, and in every case it only explains how to solve it when you have a front end.
In my case it is just a microservice that communicates with other microservices, and the only validation I do is of the file name, not of the file's content, since this microservice uploads all kinds of files to a repository.
@PostMapping("uploadFile")
public ResponseEntity<String> uploadDatos(@RequestParam MultipartFile file,
        @RequestParam(required = false) String directory) {
    File fileCast = service.multipartfileToFile(file);
    if (directory == null)
        return new ResponseEntity<String>(service.uploadFile(fileCast, ""), HttpStatus.OK);
    else
        return new ResponseEntity<String>(service.uploadFile(fileCast, directory), HttpStatus.OK);
}
The method that converts the MultipartFile to a File:
public File multipartfileToFile(MultipartFile file) {
    String filename = FilenameUtils.normalize(file.getOriginalFilename());
    // This utils class is mine, for extra validation; reject invalid names
    if (!Utils.isValidFilename(filename)) {
        throw new IllegalArgumentException();
    }
    File convFile = new File(filename);
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(convFile);
        fos.write(file.getBytes());
        fos.close();
    }
    .... // Catch clauses...
    return convFile;
}
And honestly, I don't know what to do to validate the file content; it's probably something simple.
I want to parse a huge file in RDF4J using the following code, but I get an exception due to a parser limit:
public class ConvertOntology {

    public static void main(String[] args) throws RDFParseException, RDFHandlerException, IOException {
        String file = "swetodblp_april_2008.rdf";
        File initialFile = new File(file);
        InputStream input = new FileInputStream(initialFile);

        RDFParser parser = Rio.createParser(RDFFormat.RDFXML);
        parser.setPreserveBNodeIDs(true);

        Model model = new LinkedHashModel();
        parser.setRDFHandler(new StatementCollector(model));
        parser.parse(input, initialFile.getAbsolutePath());

        FileOutputStream out = new FileOutputStream("swetodblp_april_2008.nt");
        RDFWriter writer = Rio.createWriter(RDFFormat.TURTLE, out);
        try {
            writer.startRDF();
            for (Statement st : model) {
                writer.handleStatement(st);
            }
            writer.endRDF();
        } catch (RDFHandlerException e) {
        } finally {
            out.close();
        }
    }
}
The exception is:
The parser has encountered more than "100,000" entity expansions in this document; this is the limit imposed by the application.
I execute my code as follows, as suggested on the RDF4J website, to set the two limits:
mvn -Djdk.xml.totalEntitySizeLimit=0 -DentityExpansionLimit=0 exec:java
Any help, please?
The error is due to the Apache Xerces XML parser being picked up, rather than the default JDK XML parser; the -D properties above configure the built-in JDK parser and have no effect on a standalone Xerces found on the classpath.
So just delete the Xerces XML folder from your .m2 repository and the code works fine.
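If deleting the artifact from the local repository is too blunt, another option (a sketch based on RDF4J's Rio parser settings) is to relax the parser's secure-processing feature, which is what enforces the entity expansion limit. Note that this trades away protection against entity-expansion attacks, so only do it for trusted input:
// relax the XML parser's secure-processing limits for this parser instance
parser.getParserConfig().set(XMLParserSettings.SECURE_PROCESSING, false);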
I am new to Spring Boot/Java and am trying to read the contents of a file into a String.
What's the issue:
I'm getting a "File not found" exception and am unable to read the file. Apparently, I'm not giving the correct file path.
I've attached the directory structure and my code. I'm in the FeedProcessor file and want to read feed_template.php (see image).
public static String readFileAsString() {
    String text = "";
    try {
        // text = new String(Files.readAllBytes(Paths.get("/src/main/template/feed_template_head.php")));
        text = new String(Files.readAllBytes(Paths.get("../../template/feed_template_head.php")));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return text;
}
You need to put the template folder inside the resources folder, and then use the following code.
@Configuration
public class ReadFile {

    private static final String FILE_NAME =
            "classpath:template/feed_template_head.php";

    // A @Bean method needs a return type; here the bean is the file content as a String.
    @Bean
    public String feedTemplateHead(@Value(FILE_NAME) Resource resource) throws IOException {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(resource.getInputStream()))) {
            return reader.lines().collect(Collectors.joining("\n"));
        }
    }
}
I suggest you read through the Resource topic in the Spring documentation once:
https://docs.spring.io/spring/docs/3.0.x/spring-framework-reference/html/resources.html
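Alternatively, a minimal sketch without a @Configuration class, assuming the file ends up on the classpath under template/ (i.e. it lives in src/main/resources/template/):
public static String readFileAsString() throws IOException {
    // ClassPathResource resolves against the classpath, so this works
    // from the IDE and from inside the packaged jar alike
    ClassPathResource resource = new ClassPathResource("template/feed_template_head.php");
    try (InputStream in = resource.getInputStream()) {
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    }
}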