Update hashset from .txt while app is running - java

The goal is to block access to the page for the IP addresses on a list kept in the file list.txt.
I wrote a service that checks the IP from the request against a HashSet of "unwanted" addresses, but a subgoal is to pick up changes to list.txt on the fly. What I mean: if I add an IP to this file, it should be blocked without restarting the application. I have no idea how to solve this, because my app refreshes the list only after a restart. My code is below:
@Service
public class BlackListService {

    public Set<String> loadBlackList() {
        Set<String> blackList = new HashSet<>();
        InputStream resource = null;
        try {
            resource = new ClassPathResource("blacklist.txt").getInputStream();
        } catch (IOException e) {
            e.printStackTrace();
        }
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(resource))) {
            blackList = reader.lines().collect(Collectors.toSet());
            for (String address : blackList) {
                System.out.println(address);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return blackList;
    }

    public boolean isNotAllowedIP(Set<String> blackList, String requestIP) {
        return blackList.contains(requestIP);
    }
}
And the controller:
@Controller
public class MainController {

    private final BlackListService blackListService;

    public MainController(BlackListService blackListService) {
        this.blackListService = blackListService;
    }

    @GetMapping("/")
    public String mainPage(HttpServletRequest request, Model model) {
        Set<String> blackList = blackListService.loadBlackList();
        if (blackListService.isNotAllowedIP(blackList, request.getRemoteAddr())) {
            Logger logger = Logger.getLogger("Access logs");
            logger.warning("Access disallowed");
            model.addAttribute("message", request.getRemoteAddr() + ": Access disallowed");
            return "index";
        }
        model.addAttribute("message", "Access allowed");
        return "index";
    }
}
Can someone help with this "subgoal"?

In loadBlackList() you are reading a resource from the classpath. Could this be picking up a copy of the file baked into your jar or build directory, which is not the file you are editing? I would change loadBlackList() to read from a path on the file system (e.g. with a FileReader) rather than from the classpath.
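A minimal sketch of that change, assuming the blacklist is kept at an external path such as /etc/myapp/blacklist.txt (the path is illustrative; adjust it to your deployment):
public Set<String> loadBlackList() {
    // read from the file system, not the classpath, so edits to the live file are picked up
    try (BufferedReader reader = new BufferedReader(new FileReader("/etc/myapp/blacklist.txt"))) {
        return reader.lines().collect(Collectors.toSet());
    } catch (IOException e) {
        e.printStackTrace();
        return new HashSet<>();
    }
}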

What you need is a recurring background job that reloads your blacklist after you change it. This blog post discusses a "modern" approach for doing it with Spring.
Save the last modified time of the file when your program starts and you first load the list. See this for checking a file's modified time.
Schedule the background job to run every minute (or every 5, or whatever is frequent enough for your needs).
When the job runs, check the file's current last modified time; if it differs from the saved one, it is time to reload your list. A sketch of this approach follows.
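A minimal sketch of that job, assuming Spring's @Scheduled support and an external file path (the path and field names are illustrative, not from the question); it also requires @EnableScheduling on a configuration class:
@Service
public class BlackListService {

    private final Path blackListPath = Paths.get("/etc/myapp/blacklist.txt"); // hypothetical location
    private volatile Set<String> blackList = new HashSet<>();
    private volatile long lastModified = 0L;

    @Scheduled(fixedDelay = 60_000) // re-check once a minute
    public void reloadIfChanged() throws IOException {
        long current = Files.getLastModifiedTime(blackListPath).toMillis();
        if (current != lastModified) { // the file was touched since the last load
            lastModified = current;
            blackList = new HashSet<>(Files.readAllLines(blackListPath));
        }
    }

    public boolean isNotAllowedIP(String requestIP) {
        return blackList.contains(requestIP);
    }
}
With this in place the controller no longer needs to call loadBlackList() on every request; it can simply ask the service whether the remote address is blocked.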

Related

Spring Boot file upload issue - Could not store the file error only occurs after a few days

I have a RESTful API created using Java Spring Boot 2.4.2.
One of the main issues I encountered recently is that the multipart file upload works fine, but the same code stops working after a couple of days. It works again after the RESTful JAR application is restarted.
The error displayed in Postman:
Could not store the file. Error
The relevant code to this is here:
try {
    FileUploadUtil.saveFile(uploadPath, file.getOriginalFilename(), file);
} catch (IOException e) {
    throw new RuntimeException("Could not store the file. Error: " + e.getMessage());
}
And the FileUploadUtil class:
public class FileUploadUtil {

    public static void saveFile(String uploadDir, String fileName, MultipartFile multipartFile) throws IOException {
        Path uploadPath = Paths.get(uploadDir);
        if (!Files.exists(uploadPath)) {
            Files.createDirectories(uploadPath);
        }
        try (InputStream inputStream = multipartFile.getInputStream()) {
            Path filePath = uploadPath.resolve(fileName);
            Files.copy(inputStream, filePath, StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException ioe) {
            throw new IOException("Could not save uploaded file: " + fileName, ioe);
        }
    }

    public static File fileFor(String uploadDir, String id) {
        return new File(uploadDir, id);
    }
}
And the head of the main POST API method that calls the first part of the code above:
@PostMapping(value = "/clients/details/{clientDetailsId}/files/{department}", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
@PreAuthorize("hasAuthority('PERSONNEL') or hasAuthority('CUSTODIAN') or hasAuthority('ADMIN')")
public ResponseEntity<ClientDetails> createClientDetailsFiles(@PathVariable("clientDetailsId") long clientDetailsId,
        @PathVariable("department") String department,
        @RequestPart(value = "FORM_SEC_58", required = false) MultipartFile[] FORM_SEC_58_file,
        @RequestPart(value = "FORM_SEC_78", required = false) MultipartFile[] FORM_SEC_78_file,
        @RequestPart(value = "FORM_SEC_105", required = false) MultipartFile[] FORM_SEC_105_file,
        @RequestPart(value = "FORM_SEC_51", required = false) MultipartFile[] FORM_SEC_51_file,
        @RequestPart(value = "FORM_SEC_76", required = false) MultipartFile[] FORM_SEC_76_file)
And on the application.properties side:
spring.servlet.multipart.enabled=true
spring.servlet.multipart.max-file-size=90MB
spring.servlet.multipart.max-request-size=90MB
Can anyone advise what the issue is?
I had the same issue: it was working fine at first, but after some time it stopped working. I am almost certain your issue is the same as mine, because we have the same code and the same scenario. The way I figured it out was:
Debugging
I added a statement to print the stack trace, to see what was actually going on; what you are currently doing only reports the error's message, not the whole error. So change it to:
try {
    FileUploadUtil.saveFile(uploadPath, file.getOriginalFilename(), file);
} catch (IOException e) {
    e.printStackTrace();
    throw new RuntimeException("Could not store the file. Error: " + e.getMessage());
}
Actual Problem
And the error was a FileAlreadyExistsException.
Basically, that means you are trying to upload a file with the same name twice.
Solution
To fix this issue you can use different approaches. One of them is to generate a UUID for each file, use it as the stored file name, and also save it in the database so the file can be looked up later.
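A minimal sketch of the UUID approach (the variable names are illustrative, and StringUtils is Spring's org.springframework.util.StringUtils):
// derive a unique stored name so a second upload of "report.pdf" cannot collide
String extension = StringUtils.getFilenameExtension(file.getOriginalFilename());
String storedName = UUID.randomUUID() + (extension != null ? "." + extension : "");
FileUploadUtil.saveFile(uploadPath, storedName, file);
// persist storedName together with the original file name in the database for later lookup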

Spring boot keep properties even after new deploy

Currently, I am sending crash logs of an Android app via HTTP to my server (acra), and the server saves them in a Properties object like this:
@RestController
public class EndlessBlowReportController {

    public int counter;

    @Autowired
    public static final Properties defaultProperties = new Properties();

    @PostMapping("/add_report")
    public void addReport(@RequestBody String report) {
        try {
            JSONObject jsonObject = new JSONObject(report);
            defaultProperties.put(counter, report);
            counter++;
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }

    @GetMapping("/get_reports")
    public List<String> getReports() {
        List<String> reports = new ArrayList<>();
        try {
            for (int i = 0; i < defaultProperties.size(); i++) {
                reports.add((String) defaultProperties.get(i));
            }
        } catch (Exception ex) {
        }
        return reports;
    }
}
and it works fine until I deploy a new version of the server.
How can I keep my properties even after deploy?
The properties are only stored in memory and won't be persisted to any permanent storage, such as a file or database. My recommendation would be to not store this information in properties, but instead store it in a database, or alternatively on the file system as a file.
For example, if you went with the file solution, you could load the file during startup and update it each time you get new reports. By doing so, you would persist the information and it wouldn't disappear each time you restart your server.
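A minimal sketch of the file-backed variant, assuming one report per line and a path outside the deployable (both the path and the one-line-per-report assumption are illustrative):
private final Path reportsFile = Paths.get("/var/data/reports.log"); // hypothetical location outside the JAR

@PostMapping("/add_report")
public void addReport(@RequestBody String report) throws IOException {
    // append each report as a line; a new deployment leaves the file untouched
    Files.write(reportsFile, List.of(report), StandardOpenOption.CREATE, StandardOpenOption.APPEND);
}

@GetMapping("/get_reports")
public List<String> getReports() throws IOException {
    return Files.exists(reportsFile) ? Files.readAllLines(reportsFile) : List.of();
}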
I hope you find this answer helpful.
Good luck!

Adding logs to wso2 to track logs implemented in custom java code

Below is a code snippet for a custom API Manager mediator that I'm supposed to modify for our use. I'm having trouble, though, getting the logs out of the code when running it in our wso2 environment. What would be the process for seeing the output of these logs? The code will be a jar file that I add to the repository/components/lib/ directory of the APIM; the jar file name is com.domain.wso2.apim.extensions. I need to be able to see what is being passed and which parts of the code are being hit for testing.
public class IdentifiersLookup extends AbstractMediator implements ManagedLifecycle {

    private static Log log = LogFactory.getLog(IdentifiersLookup.class);
    private String propertyPrefix = "";
    private String netIdPropertyToUse = "";
    private DataSource ds = null;
    private String DsName = null;

    public void init(SynapseEnvironment synapseEnvironment) {
        if (log.isInfoEnabled()) {
            log.info("Initializing IdentifiersLookup Mediator");
        }
        if (log.isDebugEnabled())
            log.debug("IdentifiersLookup: looking up datasource " + DsName);
        try {
            this.ds = (DataSource) new InitialContext().lookup(DsName);
        } catch (NamingException e) {
            e.printStackTrace();
        }
        if (log.isDebugEnabled())
            log.debug("IdentifiersLookup: acquired datasource");
    }
Add the line below to the log4j.properties file that resides in the wso2am-2.0.0/repository/conf/ folder, and restart the server.
log4j.logger.com.domain.wso2.apim.extensions=INFO
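Note that the mediator guards its detailed messages with log.isDebugEnabled(), so to see those log.debug(...) lines as well, set the level to DEBUG instead:
log4j.logger.com.domain.wso2.apim.extensions=DEBUG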

How to watch file for new content and retrieve that content

I have a file named foo.txt. This file contains some text. I want to achieve the following functionality:
I launch the program
I write something to the file (for example, add one row: new string in foo.txt)
I want to get ONLY the NEW content of this file.
Can you suggest the best solution to this problem? I also want to resolve a related issue: if I modify foo.txt, I want to see the diff.
The closest tool I found in Java is WatchService, but if I understood correctly, it can only detect the type of event that happened on the filesystem (a file was created, deleted, or modified), as the sketch below illustrates.
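For comparison, a minimal WatchService sketch (the directory path is illustrative): it reports that foo.txt changed, but not what changed, so a diff step is still needed on top of it.
import java.nio.file.*;

public class WatchFooTxt {
    public static void main(String[] args) throws Exception {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        Path dir = Paths.get("/some/dir"); // the directory that contains foo.txt
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take(); // blocks until the OS reports an event
            for (WatchEvent<?> event : key.pollEvents()) {
                if ("foo.txt".equals(event.context().toString())) {
                    // we only learn THAT the file changed; reading the new content is up to us
                    System.out.println("foo.txt was modified");
                }
            }
            key.reset(); // re-arm the key so further events are delivered
        }
    }
}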
Java Diff Utils is designed for that purpose.
final List<String> originalFileContents = new ArrayList<String>();
final String filePath = "C:/Users/BackSlash/Desktop/asd.txt";

FileListener fileListener = new FileListener() {
    @Override
    public void fileDeleted(FileChangeEvent paramFileChangeEvent) throws Exception {
        // use this to handle file deletion events
    }

    @Override
    public void fileCreated(FileChangeEvent paramFileChangeEvent) throws Exception {
        // use this to handle file creation events
    }

    @Override
    public void fileChanged(FileChangeEvent paramFileChangeEvent) throws Exception {
        System.out.println("File Changed");
        // get the new contents
        List<String> newFileContents = new ArrayList<String>();
        getFileContents(filePath, newFileContents);
        // get the diff between the two versions of the file
        Patch patch = DiffUtils.diff(originalFileContents, newFileContents);
        // get the single changes in a list
        List<Delta> deltas = patch.getDeltas();
        // print the changes
        for (Delta delta : deltas) {
            System.out.println(delta);
        }
    }
};

DefaultFileMonitor monitor = new DefaultFileMonitor(fileListener);
try {
    FileObject fileObject = VFS.getManager().resolveFile(filePath);
    getFileContents(filePath, originalFileContents);
    monitor.addFile(fileObject);
    monitor.start();
} catch (FileNotFoundException e) {
    // handle missing file
    e.printStackTrace();
} catch (IOException e) {
    // handle read errors
    e.printStackTrace();
}
Where getFileContents is:
void getFileContents(String path, List<String> contents) throws FileNotFoundException, IOException {
    contents.clear();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(path), "UTF-8"))) {
        String line = null;
        while ((line = reader.readLine()) != null) {
            contents.add(line);
        }
    }
}
What I did:
I loaded the original file contents in a List<String>.
I used Apache Commons VFS to listen for file changes, using FileMonitor. You may ask: why? Because WatchService is only available starting from Java 7, while FileMonitor works with at least Java 5 (a personal preference; if you prefer WatchService, you can use it). Note: Apache Commons VFS depends on Apache Commons Logging; you'll have to add both to your build path in order to make it work.
I created a FileListener, then I implemented the fileChanged method.
That method loads the new contents from the file and uses DiffUtils.diff to retrieve all differences, then prints them.
I created a DefaultFileMonitor, which basically listens for changes to a file, and I added my file to it.
I started the monitor.
After the monitor is started, it will begin listening for file changes.

Pig UDF Maxmind GeoIP Database Data File Loading Issue

The following code works when I execute the Pig script locally while specifying a local GeoIPASNum.dat file. However, it does not work when run in MapReduce distributed mode. What am I missing?
Pig job
DEFINE AsnResolver AsnResolver('/hdfs/location/of/GeoIPASNum.dat');
loaded = LOAD 'log_file' Using PigStorage() AS (ip:chararray);
columned = FOREACH loaded GENERATE AsnResolver(ip);
STORE columned INTO 'output/' USING PigStorage();
AsnResolver.java
public class AsnResolver extends EvalFunc<String> {

    String ipAsnFile = null;

    @Override
    public String exec(Tuple input) throws IOException {
        try {
            LookupService lus = new LookupService(ipAsnFile,
                    LookupService.GEOIP_MEMORY_CACHE);
            return lus.getOrg((String) input.get(0));
        } catch (IOException e) {
        }
        return null;
    }

    public AsnResolver(String file) {
        ipAsnFile = file;
    }

    ...
}
The problem is that you are using a string reference to an HDFS path and the LookupService constructor can't resolve the file. It probably works when you run it locally since the LookupService has no problem with a file in your local FS.
Override the getCacheFiles method:
@Override
public List<String> getCacheFiles() {
    List<String> list = new ArrayList<String>(1);
    list.add(ipAsnFile + "#GeoIPASNum.dat");
    return list;
}
Then change your LookupService constructor to use the Distributed Cache reference to "GeoIPASNum.dat":
LookupService lus = new LookupService("GeoIPASNum.dat", LookupService.GEOIP_MEMORY_CACHE);
Search for "Distributed Cache" in this page of the Pig docs: http://pig.apache.org/docs/r0.11.0/udf.html
The example it shows using the getCacheFiles() method should ensure that the file is accessible to all the nodes in the cluster.
