I have this REST endpoint and I'm trying to mock some responses.
I'm working on a WebSphere server with Spring Boot
@RequestMapping(method = RequestMethod.GET, value = "", produces = {MediaType.APPLICATION_JSON_VALUE, "application/hal+json"})
public Resources<String> getAssemblyLines() throws IOException {
String fullMockPath = servletContext.getContextPath() + "\\assets\\services-mocks\\assembly-lines\\get-assembly-lines.ok.json";
List<String> result = new ArrayList<String>();
result.add(fullMockPath);
try {
byte[] rawJson = Files.readAllBytes(Paths.get(fullMockPath));
// parse the raw file contents instead of serializing an empty map (requires com.fasterxml.jackson.core.type.TypeReference)
Map<String, String> mappedJson = new ObjectMapper().readValue(rawJson, new TypeReference<Map<String, String>>() {});
String jsonMock = new ObjectMapper().writeValueAsString(mappedJson);
result = new ArrayList<String>();
result.add(jsonMock);
} catch (IOException e) {
result.add("Not found");
result.add(e.getMessage());
}
return new Resources<String>(result,
linkTo(methodOn(this.getClass()).getAssemblyLines()).withSelfRel());
}
I get a
FileNotFoundException
I tried Tushinov's solution:
System.getProperty("user.dir");
But that returns the path of my server, not of my document root (and yes, they are in different folders).
How can I find out my base path?
To your question "How can I find out my base path?": you can use
System.getProperty("user.dir")
System.getProperty("user.dir")will return the path to your project.
Example output:
C:\folder_with_java_projects\CURRENT_PROJECT
So if the file is inside your project folder you can just do the following:
System.getProperty("user.dir") + "somePackage\someJson.json";
I have a SQL query.
This is my Abc class:
public class Abc {
private Integer ABC_ID;
private String RECIP_ID, RECIP_FIRST_NAME, RECIP_MIDDLE_NAME;
public Abc() {
//super();
// TODO Auto-generated constructor stub
}
}
and the rest are the getters and setters.
@RequestMapping(value = "/abc/export", method = RequestMethod.GET)
@ResponseBody
public ResponseEntity<List<Abc>> getProviderExport(@RequestParam String id, HttpServletResponse response){
List<Abc> abcdetails = jdbcTemplate.query(sqlQuery, new BeanPropertyRowMapper<Abc>(Abc.class));
HttpHeaders responseHeaders = new HttpHeaders();
//responseHeaders.setContentType(MediaType.TEXT_PLAIN);
String filename1 = "output.txt";
//responseHeaders.setContentType("text/plain");
response.setCharacterEncoding("UTF-8");
responseHeaders.add("Content-Disposition","attachment; filename="+filename1);
return new ResponseEntity<>(abcdetails, responseHeaders, HttpStatus.OK);
}
When I execute the above code and call the API, it prompts me to save the output file as output.txt, but when I open the file the content is still in JSON format. Can anyone help me produce output.txt as tab-delimited, uppercase text?
That happens because you return ResponseEntity<List<Abc>>: a List is automatically converted to a JSON array. You need to return a ResponseEntity<String>, so convert your List<Abc> to a String.
Something like this:
final StringBuilder textData = new StringBuilder();
for (final Abc abcDetail : abcdetails) {
    // Tab-delimited and uppercased, as the question asks; append your other fields the same way
    final String line = abcDetail.getRECIP_FIRST_NAME().toUpperCase() + "\t" + abcDetail.getRECIP_ID();
    textData.append(line).append("\n");
}
return new ResponseEntity<>(textData.toString(), responseHeaders, HttpStatus.OK);
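If the download still comes back with a JSON content type, it can also help to set the content type explicitly on the headers you already build; a small addition, assuming Spring's HttpHeaders and MediaType are imported:
responseHeaders.setContentType(MediaType.TEXT_PLAIN); // serve the body as plain text
responseHeaders.add("Content-Disposition", "attachment; filename=" + filename1);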
You have to convert that JSON to text using the Jackson JSON Java parser. I am attaching a link to a brief example of how to convert JSON to a tab-delimited, uppercase text file.
Here is the link : https://www.journaldev.com/2324/jackson-json-java-parser-api-example-tutorial
I've been trying to figure out how to use WordNet synonyms with a search function I'm developing which uses Hibernate Search 5.6.1. At first, I thought about using Hibernate Search annotations:
@TokenFilterDef(factory = SynonymFilterFactory.class, params = {@Parameter(name = "ignoreCase", value = "true"),
@Parameter(name = "expand", value = "true"), @Parameter(name = "synonyms", value = "synonymsfile") })
However, this requires an actual file populated with synonyms. From WordNet I was only able to get ".pl" files. So I tried manually making a SynonymAnalyzer class which would read from the ".pl" file:
public class SynonymAnalyzer extends Analyzer {
@Override
protected TokenStreamComponents createComponents(String fieldName) {
final Tokenizer source = new StandardTokenizer();
TokenStream result = new StandardFilter(source);
result = new LowerCaseFilter(result);
SynonymMap wordnetSynonyms = null;
try {
wordnetSynonyms = loadSynonyms();
} catch (IOException e) {
e.printStackTrace();
}
result = new SynonymFilter(result, wordnetSynonyms, false);
result = new StopFilter(result, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
return new TokenStreamComponents(source, result);
}
private SynonymMap loadSynonyms() throws IOException {
File file = new File("synonyms\\wn_s.pl");
InputStream stream = new FileInputStream(file);
Reader reader = new InputStreamReader(stream);
WordnetSynonymParser parser = new WordnetSynonymParser(true, true, new StandardAnalyzer(CharArraySet.EMPTY_SET));
try {
parser.parse(reader);
} catch (ParseException e) {
e.printStackTrace();
}
return parser.build();
}
}
The problem with this approach is that I'm getting a java.lang.OutOfMemoryError, which I assume is because there are too many synonyms or something. What is the proper way to do this? Everywhere I've looked online suggests using WordNet, but I can't find an example that uses Hibernate Search annotations. Any help is appreciated, thanks!
The wordnet format is actually supported by SynonymFilterFactory. You're simply missing the "format" parameter in your annotation configuration; by default, the factory uses the Solr format.
Change your annotation to this:
@TokenFilterDef(
    factory = SynonymFilterFactory.class,
    params = {
        @Parameter(name = "ignoreCase", value = "true"),
        @Parameter(name = "expand", value = "true"),
        @Parameter(name = "synonyms", value = "synonymsfile"),
        @Parameter(name = "format", value = "wordnet") // Add this
    }
)
Also, make sure that the value of the "synonyms" parameter is the path of a file in your classpath (e.g. "com/acme/synonyms.pl", or just "synonyms.pl" if the file is at the root of your "resources" directory).
In general, when you have an issue with the parameters of a Lucene filter/tokenizer factory, your best bet is to have a look at the source code of that factory, or at this page.
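For reference, here is a minimal sketch of how the corrected filter could sit inside a full analyzer definition (the analyzer name, the extra lower-case filter, and the assumption that wn_s.pl sits at the root of your resources directory are mine, not taken from your setup):
@AnalyzerDef(name = "wordnetSynonymAnalyzer",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = SynonymFilterFactory.class, params = {
            @Parameter(name = "ignoreCase", value = "true"),
            @Parameter(name = "expand", value = "true"),
            @Parameter(name = "synonyms", value = "wn_s.pl"),
            @Parameter(name = "format", value = "wordnet")
        })
    })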
I have a folder named collect containing files such as selectData01.json, selectData02.json, selectData03.json, and so on.
I have to count the number of files first, and then I will send a different file every minute.
Now I want to know how to achieve this.
public String getData() {
String strLocation = new SendSituationData().getClass().getProtectionDomain().getCodeSource().getLocation().getPath();
log.info("strLocation = ");
// String strParent = new File(strLocation).getParent() + "/collectData/conf.properties";
// System.out.println("strParent = " + strParent);
File fileConf = new File("collect/");
System.out.println("fileConf = " + fileConf.getAbsolutePath());
List<List<String>> listFiles = new ArrayList<>();
//File root = new File(DashBoardListener.class.getClassLoader().getResource("collectData/").getPath());
//File root = new File("collectData/application.conf");
File root = new File(fileConf.getAbsolutePath());
System.out.println("root.listFiles( ) = " + root.listFiles( ));
Arrays
.stream(Objects.requireNonNull(root.listFiles( )))
.filter(file -> file.getName().endsWith("json"))
.map(File::toPath)
.forEach(path -> {
try {
//List<String> lines = Files.readAllLines(path);
//System.out.println("lines = " + lines);
List<String> lines = Files.readAllLines(path);
listFiles.add(lines);
} catch (IOException e) {
e.printStackTrace( );
}
});
String dataBody = listToString(listFiles.get(0));
//log.info(dataBody);
ResultMap result = buildRsult();
//String jsonString = JSON.toJSONString(result);
return dataBody;
}
public static String listToString(List<String> stringList){
if (stringList == null) {
return null;
}
StringBuilder result=new StringBuilder();
boolean flag=false;
for (String string : stringList) {
if (flag) {
result.append("");
}else {
flag=true;
}
result.append(string);
}
return result.toString();
}
supplement
My friend, maybe I didn't express my purpose clearly. If I have three files, I will send the first file at 0:00, the second file at 0:01, and the third file at 0:02; then the first file again at 0:03, the second at 0:04, and so on.
If I have five files, I will send the first file at 0:00, the second at 0:01, the third at 0:02, the fourth at 0:03, and the fifth at 0:04, and so on.
I want to know how to implement this.
supplement
My project contains a folder named collect, and each file in it represents a string.
First, I want to count the number of files in the collect folder, and then I will send one file every minute.
Any suggestions?
I would use Apache Camel with the file2 component.
http://camel.apache.org/file2.html
Please read about the 'noop' option before running any tests.
As far as I remember, processed files are moved to a .camel folder by default rather than left in place.
Update - simple example added:
I would recommend starting with https://start.spring.io/
Add at least two dependencies: Web and Camel (requires Spring Boot >=1.4.0.RELEASE and <2.0.0.M1).
Create a new route; you can start from this example:
@Component
public class FileRouteBuilder extends RouteBuilder {
public static final String DESTINATION = "file://out/";
public static final String SOURCE = "file://in/?noop=true";
@Override
public void configure() throws Exception {
from(SOURCE)
.process(exchange -> {
//your processing here
})
.log("File: ${file:name} has been sent to: " + DESTINATION)
.to(DESTINATION);
}
}
My output:
2018-03-22 15:24:08.917 File: test1.txt has been sent to: file://out/
2018-03-22 15:24:08.931 File: test2.txt has been sent to: file://out/
2018-03-22 15:24:08.933 File: test3.txt has been sent to: file://out/
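To get the one-file-per-minute behaviour asked about in the question, the file consumer can be throttled with standard options of the file component (delay is the poll interval in milliseconds, maxMessagesPerPoll limits how many files are picked up per poll). A sketch of the source URI under those assumptions:
// Assumption: poll the folder once a minute and take at most one file per poll
public static final String SOURCE = "file://in/?noop=true&delay=60000&maxMessagesPerPoll=1";
Note that with noop=true Camel remembers which files it has already consumed, so if the same files have to be re-sent in a cycle (as in the supplement) you would also need idempotent=false.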
I'm trying to upload multiple files to Amazon S3 all under the same key, by appending the files. I have a list of file names and want to upload/append the files in that order. I am pretty much exactly following this tutorial but I am looping through each file first and uploading that in part. Because the files are on hdfs (the Path is actually org.apache.hadoop.fs.Path), I am using the input stream to send the file data. Some pseudocode is below (I am commenting the blocks that are word for word from the tutorial):
// Create a list of UploadPartResponse objects. You get one of these for
// each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();
// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(
bk.getBucket(), bk.getKey());
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);
try {
int i = 1; // part number
for (String file : files) {
Path filePath = new Path(file);
// Get the input stream and content length
long contentLength = fss.get(branch).getFileStatus(filePath).getLen();
InputStream is = fss.get(branch).open(filePath);
long filePosition = 0;
while (filePosition < contentLength) {
// create request
//upload part and add response to our list
i++;
}
}
// Step 3: Complete.
CompleteMultipartUploadRequest compRequest = new
CompleteMultipartUploadRequest(bk.getBucket(),
bk.getKey(),
initResponse.getUploadId(),
partETags);
s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
//...
}
However, I am getting the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: 2C1126E838F65BB9), S3 Extended Request ID: QmpybmrqepaNtTVxWRM1g2w/fYW+8DPrDwUEK1XeorNKtnUKbnJeVM6qmeNcrPwc
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1109)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:741)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:461)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:296)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3743)
at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2617)
If anyone knows what the cause of this error might be, that would be greatly appreciated. Alternatively, if there is a better way to concatenate a bunch of files into one S3 key, that would be great as well. I tried using Java's built-in SequenceInputStream but that did not work. Any help would be greatly appreciated. For reference, the total size of all the files could be as large as 10-15 GB.
I know it's probably a bit late, but it's worth giving my contribution.
I've managed to solve a similar problem using the SequenceInputStream.
The trick is being able to calculate the total size of the result file and then feeding the SequenceInputStream with an Enumeration<InputStream>.
Here's some example code that might help:
public void combineFiles() {
List<String> files = getFiles();
long totalFileSize = files.stream()
.map(this::getContentLength)
.reduce(0L, (f, s) -> f + s);
try {
try (InputStream partialFile = new SequenceInputStream(getInputStreamEnumeration(files))) {
ObjectMetadata resultFileMetadata = new ObjectMetadata();
resultFileMetadata.setContentLength(totalFileSize);
s3Client.putObject("bucketName", "resultFilePath", partialFile, resultFileMetadata);
}
} catch (IOException e) {
LOG.error("An error occurred while combining files.", e);
}
}
private Enumeration<? extends InputStream> getInputStreamEnumeration(List<String> files) {
return new Enumeration<InputStream>() {
private Iterator<String> fileNamesIterator = files.iterator();
@Override
public boolean hasMoreElements() {
return fileNamesIterator.hasNext();
}
@Override
public InputStream nextElement() {
try {
return new FileInputStream(Paths.get(fileNamesIterator.next()).toFile());
} catch (FileNotFoundException e) {
System.err.println(e.getMessage());
throw new RuntimeException(e);
}
}
};
}
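The getContentLength helper used in combineFiles isn't shown above; for plain local files it could be as simple as the sketch below (a hypothetical helper of mine, not part of the original answer; for HDFS paths, as in the question, you would call getFileStatus on the Hadoop FileSystem instead):
// Hypothetical helper: size in bytes of a local file identified by its path string
private long getContentLength(String file) {
    try {
        return java.nio.file.Files.size(java.nio.file.Paths.get(file));
    } catch (java.io.IOException e) {
        throw new RuntimeException(e);
    }
}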
Hope this helps!
I'm using Nutch to crawl some websites (as a process that runs separate of everything else), while I want to use a Java (Scala) program to analyse the HTML data of websites using Jsoup.
I got Nutch to work by following the tutorial (the crawl script didn't work for me, but executing the individual commands did), and I think it's saving the websites' HTML in the crawl/segments/<time>/content/part-00000 directory.
The problem is that I cannot figure out how to actually read the website data (URLs and HTML) in a Java/Scala program. I read this document, but find it a bit overwhelming since I've never used Hadoop.
I tried to adapt the example code to my environment, and this is what I arrived at (mostly by guesswork):
val reader = new MapFile.Reader(FileSystem.getLocal(new Configuration()), ".../apache-nutch-1.8/crawl/segments/20140711115438/content/part-00000", new Configuration())
var key = null
var value = null
reader.next(key, value) // test for a single value
println(key)
println(value)
However, I am getting this exception when I run it:
Exception in thread "main" java.lang.NullPointerException
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1873)
at org.apache.hadoop.io.MapFile$Reader.next(MapFile.java:517)
I am not sure how to work with a MapFile.Reader, specifically, what constructor parameters I am supposed to pass to it. What Configuration objects am I supposed to pass in? Is that the correct FileSystem? And is that the data file I'm interested in?
Scala:
val conf = NutchConfiguration.create()
val fs = FileSystem.get(conf)
val file = new Path(".../part-00000/data")
val reader = new SequenceFile.Reader(fs, file, conf)
val webdata = Stream.continually {
val key = new Text()
val content = new Content()
reader.next(key, content)
(key, content)
}
println(webdata.head)
Java:
public class ContentReader {
public static void main(String[] args) throws IOException {
Configuration conf = NutchConfiguration.create();
Options opts = new Options();
GenericOptionsParser parser = new GenericOptionsParser(conf, opts, args);
String[] remainingArgs = parser.getRemainingArgs();
FileSystem fs = FileSystem.get(conf);
String segment = remainingArgs[0];
Path file = new Path(segment, Content.DIR_NAME + "/part-00000/data");
SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
Text key = new Text();
Content content = new Content();
// Loop through sequence files
while (reader.next(key, content)) {
try {
System.out.write(content.getContent(), 0,
content.getContent().length);
} catch (Exception e) {
}
}
}
}
Alternatively, you can use org.apache.nutch.segment.SegmentReader (example).