Dropwizard Path: Get all paths in app - java

Does anyone know how to get the same information about which paths are registered as is printed at the start of a Dropwizard application? I mean the output after this line:
io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:
GET /path/of/res/test (this.is.the.class.package.info.MyRessource)
POST /path/of/res/test2 (this.is.the.class.package.info.MyRessource2)
I have to check whether a specific path exists.

You'll have to do this on your own. Take a look at the logEndpoints method (which is what actually logs this information - with private methods). You should be able to adapt this method to handle the resources from your environment.jersey().getResourceConfig() after you configure your resources in your run method.
Something like:
final ImmutableList.Builder<Class<?>> builder = ImmutableList.builder();
for (Object o : environment.jersey().getResourceConfig().getSingletons()) {
    if (o.getClass().isAnnotationPresent(Path.class)) {
        builder.add(o.getClass());
    }
}
for (Class<?> klass : environment.jersey().getResourceConfig().getClasses()) {
    if (klass.isAnnotationPresent(Path.class)) {
        builder.add(klass);
    }
}
final List<String> endpoints = Lists.newArrayList();
for (Class<?> klass : builder.build()) {
    AbstractResource resource = IntrospectionModeller.createResource(klass);
    endpoints.add(resource.getPath().getValue());
}
Note that what's in master is slightly ahead of what's in Maven - the above example shows how to get the AbstractResource, which will work with 0.7.1. You'll have to be sure to adapt your method as Dropwizard evolves. This example also doesn't normalize the path, but you can easily add that based on logEndpoints.
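To answer the original question ("check if a specific path exists"): once the endpoints list is built, the lookup itself is trivial. A minimal sketch (the `endpoints` contents here are illustrative sample paths, not taken from a real app, and normalization is assumed to have been done already):

```java
import java.util.Arrays;
import java.util.List;

public class PathCheck {
    // Returns true if the given path was registered by one of the resources.
    static boolean pathExists(List<String> endpoints, String path) {
        return endpoints.contains(path);
    }

    public static void main(String[] args) {
        List<String> endpoints = Arrays.asList("/path/of/res/test", "/path/of/res/test2");
        System.out.println(pathExists(endpoints, "/path/of/res/test"));  // true
        System.out.println(pathExists(endpoints, "/path/of/res/other")); // false
    }
}
```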

This solution works for me (DW 0.7.1):
private Multimap<String, String> getEndpoints(Environment environment) {
    Multimap<String, String> resources = ArrayListMultimap.create();
    ResourceConfig jrConfig = environment.jersey().getResourceConfig();
    Set<Object> dwSingletons = jrConfig.getSingletons();
    for (Object singleton : dwSingletons) {
        if (singleton.getClass().isAnnotationPresent(Path.class)) {
            AbstractResource resource = IntrospectionModeller.createResource(singleton.getClass());
            AbstractResource superResource = IntrospectionModeller.createResource(singleton.getClass().getSuperclass());
            String uriPrefix = getStringWithoutStartingSlash(resource.getPath().getValue());
            for (AbstractResourceMethod srm : resource.getResourceMethods()) {
                String uri = uriPrefix;
                resources.put(uri, srm.getHttpMethod());
                LOG.info("Found http method " + srm.getHttpMethod() + " for the path " + uri + " returning (class) " + srm.getReturnType().getName());
            }
            for (AbstractSubResourceMethod srm : resource.getSubResourceMethods()) {
                // extended resources' methods will be added by hand
                if (superResource != null) {
                    for (AbstractSubResourceMethod superSrm : superResource.getSubResourceMethods()) {
                        String srmPath = getStringWithoutStartingSlash(srm.getPath().getValue());
                        String superSrmPath = getStringWithoutStartingSlash(superSrm.getPath().getValue());
                        Class<?> srmClass = srm.getDeclaringResource().getResourceClass();
                        Class<?> superSrmClass = superSrm.getDeclaringResource().getResourceClass();
                        // add superclass method if methodName is not equal to superMethodName
                        if (srmClass.getSuperclass().equals(superSrmClass) && !srm.getMethod().getName().equals(superSrm.getMethod().getName())) {
                            String uri = uriPrefix + "/" + srmPath + "/" + superSrmPath;
                            resources.put(uri, superSrm.getHttpMethod());
                            LOG.info("Found http method " + superSrm.getHttpMethod() + " for the path " + uri + " returning (class) " + superSrm.getReturnType().getName());
                        }
                    }
                }
                String uri = uriPrefix + "/" + getStringWithoutStartingSlash(srm.getPath().getValue());
                resources.put(uri, srm.getHttpMethod());
                LOG.info("Found http method " + srm.getHttpMethod() + " for the path " + uri + " returning (class) " + srm.getReturnType().getName());
            }
        }
    }
    return resources;
}
Note that @PathParam placeholders are left as-is, e.g. with @Path("/{id}") the resulting path will contain '.../{id}'.
If you extend your resources and the superclass also has a @Path annotation, this method will produce even more information than the default DW logEndpoints() method!
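The `getStringWithoutStartingSlash` helper used above isn't shown in the answer; a minimal version might look like this (my guess at the intended behavior):

```java
public class PathUtil {
    // Strips a single leading slash so path segments can be joined with "/" uniformly.
    static String getStringWithoutStartingSlash(String path) {
        return path.startsWith("/") ? path.substring(1) : path;
    }

    public static void main(String[] args) {
        System.out.println(getStringWithoutStartingSlash("/path/of/res")); // path/of/res
        System.out.println(getStringWithoutStartingSlash("no-slash"));     // no-slash
    }
}
```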
FYI: The imports used in class
import java.util.Set;
import javax.ws.rs.Path;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;
import com.sun.jersey.api.core.ResourceConfig;
import com.sun.jersey.api.model.AbstractResource;
import com.sun.jersey.api.model.AbstractResourceMethod;
import com.sun.jersey.api.model.AbstractSubResourceMethod;
import com.sun.jersey.server.impl.modelapi.annotation.IntrospectionModeller;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import io.dropwizard.setup.Environment;

I used a simpler approach for getting the same data. All the resources here are Jersey resources.
Map<String, Object> beansWithAnnotation = applicationContext.getBeansWithAnnotation(Path.class);
Collection<Object> values = beansWithAnnotation.values();
for (Object next : values) {
    ResourceUtil.getResourceUrls(next);
}

public static List<String> getResourceUrls(Object obj) {
    Resource resource = Resource.from(obj.getClass());
    String uriPrefix = resource.getPath();
    List<String> urls = new ArrayList<>();
    for (Resource res : resource.getChildResources()) {
        String uri = uriPrefix + res.getPath();
        urls.add(uri);
    }
    return urls;
}

Related

mule message.getInvocationProperty cannot be resolved from within Java method

I'm trying to access a mule flowVar from within a Java class:
In the mule processor:
flowVars.rootFilePath="c:\test"
From within the mule processor, I'm calling the java method renameFile(oldFile, newFile) :
package com.rename;

import java.io.File;
import org.mule.api.MuleMessage;

public class FileRename {
    public String renameFile(String oldFile, String newFile) {
        File file1 = new File(message.getInvocationProperty("rootFilePath") + oldFile);
        File file2 = new File(message.getInvocationProperty("rootFilePath") + newFile);
        file1.renameTo(file2);
        return "Renaming " + oldFile + " to: " + newFile;
    }
}
However, I'm receiving the error "message cannot be resolved". What am I missing here? Your help is very much appreciated!
Why not use the onCall method to do this?
You can use the code below as a sample for accessing the message.
public class MyComponent implements Callable {
    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        String oldFile = eventContext.getMessage().getProperty("");
        String newFile = eventContext.getMessage().getProperty("");
        return "Renaming " + oldFile + " to: " + newFile;
    }
}

Sonar-ws-client returning 401 error even though I can perform a search in the UI

I have the following source code:
package com.sample.sonar.report;
import org.sonar.wsclient.Host;
import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.SonarClient;
import org.sonar.wsclient.connectors.HttpClient4Connector;
import org.sonar.wsclient.services.*;
import org.sonar.wsclient.issue.*;
import java.util.List;
public class App {
    public static void main(String args[]) {
        try {
            String url = "http://sample.url.com:9000";
            String login = "userid";
            String password = "password";
            SonarClient client = SonarClient.create(url);
            client.builder().login(login);
            client.builder().password(password);
            IssueQuery query = IssueQuery.create();
            query.rules("S1081");
            query.languages("c");
            IssueClient issueClient = client.issueClient();
            System.out.println("About to Run query\n");
            Issues issues = issueClient.find(query);
            System.out.println("Ran query\n");
            List<Issue> issueList = issues.list();
            for (int i = 0; i < issueList.size(); i++) {
                System.out.println(issueList.get(i).projectKey() + " " +
                        issueList.get(i).componentKey() + " " +
                        issueList.get(i).line() + " " +
                        issueList.get(i).ruleKey() + " " +
                        issueList.get(i).severity() + " " +
                        issueList.get(i).message());
            }
        } catch (Exception ex) {
            System.out.println(ex);
        }
    }
}
Although I can log into Sonar and search the results I am getting the following runtime error
org.sonar.wsclient.base.HttpException: Error 401 on http://sample.url.com:9000/api/issues/search?languages=c&rules=S1081
I believe this is a 401 error which maps to Unauthorized. Am I missing some sort of authentication beyond what I am doing? Thanks in advance.
SonarClient#builder() is a static method which returns a new builder instance each time it is called. So basically, when you write:
SonarClient client = SonarClient.create(url);
client.builder().login(login);
client.builder().password(password);
you are setting the login on one Builder object, setting the password on another, and building a client from neither of them.
The object you are actually using is the SonarClient created by SonarClient#create(String url), which, indeed, has neither login nor password, hence the 401 HTTP error.
What you should write is:
SonarClient client = SonarClient.builder()
.url(url)
.login(login)
.password(password)
.build();
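The pitfall can be demonstrated with a stripped-down builder (a hypothetical `Builder` class, not the actual Sonar API): each factory call returns a fresh instance, so state set on one builder is invisible to the next.

```java
public class BuilderStateDemo {
    static class Builder {
        String login;
        Builder login(String l) { this.login = l; return this; }
    }

    // Mirrors the shape of SonarClient.builder(): a brand-new Builder on every call.
    static Builder builder() { return new Builder(); }

    public static void main(String[] args) {
        Builder first = builder().login("userid");
        Builder second = builder(); // a fresh builder: its login was never set
        System.out.println(first.login);  // userid
        System.out.println(second.login); // null
    }
}
```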

Introspecting Jersey resource model Jersey 2.x

I have written my own scanner to go through my JAX-RS resources and print out the method names and paths, using jersey-server-1.18.1. The problem is that when I migrate the same code to 2.16 (changing the package names from com.sun.* to org.glassfish.*), it just won't work.
Digging deeper, I found that the required jersey-server classes are no longer public. Does anyone know the reason why? And how can I migrate my code below from 1.x to 2.x? There is literally no documentation on this migration.
All help appreciated! Below is the code with 1.x
import com.wordnik.swagger.annotations.ApiOperation;
import com.sun.jersey.api.model.AbstractResource;
import com.sun.jersey.api.model.AbstractResourceMethod;
import com.sun.jersey.api.model.AbstractSubResourceLocator;
import com.sun.jersey.api.model.AbstractSubResourceMethod;
import com.sun.jersey.server.impl.modelapi.annotation.IntrospectionModeller;
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
/**
 *
 * @author shivang
 */
public class Apiscanner {
    public static void main(String[] args) {
        Apiscanner runClass = new Apiscanner();
        runClass.xyz();
    }

    public void xyz() {
        AbstractResource resource = IntrospectionModeller.createResource(BaseResource.class);
        String uriPrefix = resource.getPath().getValue();
        abc(uriPrefix, resource);
    }

    public void abc(String uriPrefix, AbstractResource resource) {
        for (AbstractResourceMethod srm : resource.getResourceMethods()) {
            String uri = uriPrefix;
            System.out.println(srm.getHttpMethod() + "\t" + uri);
        }
        for (AbstractSubResourceMethod srm : resource.getSubResourceMethods()) {
            String uri = uriPrefix + srm.getPath().getValue();
            ApiOperation op = srm.getAnnotation(ApiOperation.class);
            System.out.println(srm.getHttpMethod() + "\t" + uri);
        }
        if (resource.getSubResourceLocators() != null && !resource.getSubResourceLocators().isEmpty()) {
            for (AbstractSubResourceLocator subResourceLocator : resource.getSubResourceLocators()) {
                ApiOperation op = subResourceLocator.getAnnotation(ApiOperation.class);
                AbstractResource childResource = IntrospectionModeller.createResource(op.response());
                String path = subResourceLocator.getPath().getValue();
                String pathPrefix = uriPrefix + path;
                abc(pathPrefix, childResource);
            }
        }
    }
}
The new APIs for Jersey 2.x, can mainly be found in the org.glassfish.jersey.server.model package.
Some equivalents I can think of:
AbstractResource == Resource
IntrospectionModeller.createResource == I believe Resource.from(BaseResource.class)
AbstractResourceMethod == ResourceMethod
resource.getSubResourceMethods() == getChildResources(), which actually just returns a List<Resource>
AbstractSubResourceLocator == Doesn't seem to exist. We would simply check the above child resource to see if it is a locator
for (Resource childResource : resource.getChildResources()) {
    if (childResource.getResourceLocator() != null) {
        ResourceMethod method = childResource.getResourceLocator();
        Class locatorType = method.getInvocable().getRawResponseType();
    }
}
You can also use the enum ResourceMethod.JaxrsType.SUB_RESOURCE_LOCATOR to check if it equals the ResourceMethod.getType()
if (resourceMethod.getType()
        .equals(ResourceMethod.JaxrsType.SUB_RESOURCE_LOCATOR)) {
}
Here's what I was able to come up with, to kind of match what you got.
import com.wordnik.swagger.annotations.ApiOperation;
import org.glassfish.jersey.server.model.Resource;
import org.glassfish.jersey.server.model.ResourceMethod;
public class ApiScanner {
    public static void main(String[] args) {
        ApiScanner scanner = new ApiScanner();
        scanner.xyz();
    }

    public void xyz() {
        Resource resource = Resource.from(BaseResource.class);
        abc(resource.getPath(), resource);
    }

    public void abc(String uriPrefix, Resource resource) {
        for (ResourceMethod resourceMethod : resource.getResourceMethods()) {
            String uri = uriPrefix;
            System.out.println("-- Resource Method --");
            System.out.println(resourceMethod.getHttpMethod() + "\t" + uri);
            ApiOperation api = resourceMethod.getInvocable().getDefinitionMethod()
                    .getAnnotation(ApiOperation.class);
        }
        for (Resource childResource : resource.getChildResources()) {
            System.out.println("-- Child Resource --");
            System.out.println(childResource.getPath() + "\t" + childResource.getName());
            if (childResource.getResourceLocator() != null) {
                System.out.println("-- Sub-Resource Locator --");
                ResourceMethod method = childResource.getResourceLocator();
                Class locatorType = method.getInvocable().getRawResponseType();
                System.out.println(locatorType);
                Resource subResource = Resource.from(locatorType);
                abc(childResource.getPath(), subResource);
            }
        }
    }
}
OK, so I was able to get it to work almost at the same time as @peeskillet provided the answer. I will add just a different flavor of the answer in case people want to reuse the code:
import java.util.ArrayList;
import java.util.List;
import org.glassfish.jersey.server.model.Resource;
import org.glassfish.jersey.server.model.ResourceMethod;
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
/**
 *
 * @author shivang
 */
public class JerseyResourceScanner {
    public static void main(String[] args) {
        JerseyResourceScanner runClass = new JerseyResourceScanner();
        runClass.scan(BaseResource.class);
    }

    public void scan(Class baseClass) {
        Resource resource = Resource.builder(baseClass).build();
        String uriPrefix = "";
        process(uriPrefix, resource);
    }

    private void process(String uriPrefix, Resource resource) {
        String pathPrefix = uriPrefix;
        List<Resource> resources = new ArrayList<>();
        resources.addAll(resource.getChildResources());
        if (resource.getPath() != null) {
            pathPrefix = pathPrefix + resource.getPath();
        }
        for (ResourceMethod method : resource.getAllMethods()) {
            if (method.getType().equals(ResourceMethod.JaxrsType.SUB_RESOURCE_LOCATOR)) {
                resources.add(
                        Resource.from(resource.getResourceLocator()
                                .getInvocable().getDefinitionMethod().getReturnType()));
            } else {
                System.out.println(method.getHttpMethod() + "\t" + pathPrefix);
            }
        }
        for (Resource childResource : resources) {
            process(pathPrefix, childResource);
        }
    }
}

How to scrape using crawler4j?

I've been going at this for 4 hours now, and I simply can't see what I'm doing wrong. I have two files:
MyCrawler.java
Controller.java
MyCrawler.java
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;
import java.util.List;
import java.util.regex.Pattern;
import org.apache.http.Header;
public class MyCrawler extends WebCrawler {

    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4"
            + "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

    /**
     * You should implement this function to specify whether the given url
     * should be crawled or not (based on your crawling logic).
     */
    @Override
    public boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches() && href.startsWith("http://www.ics.uci.edu/");
    }

    /**
     * This function is called when a page is fetched and ready to be processed
     * by your program.
     */
    @Override
    public void visit(Page page) {
        int docid = page.getWebURL().getDocid();
        String url = page.getWebURL().getURL();
        String domain = page.getWebURL().getDomain();
        String path = page.getWebURL().getPath();
        String subDomain = page.getWebURL().getSubDomain();
        String parentUrl = page.getWebURL().getParentUrl();
        String anchor = page.getWebURL().getAnchor();
        System.out.println("Docid: " + docid);
        System.out.println("URL: " + url);
        System.out.println("Domain: '" + domain + "'");
        System.out.println("Sub-domain: '" + subDomain + "'");
        System.out.println("Path: '" + path + "'");
        System.out.println("Parent page: " + parentUrl);
        System.out.println("Anchor text: " + anchor);
        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String text = htmlParseData.getText();
            String html = htmlParseData.getHtml();
            List<WebURL> links = htmlParseData.getOutgoingUrls();
            System.out.println("Text length: " + text.length());
            System.out.println("Html length: " + html.length());
            System.out.println("Number of outgoing links: " + links.size());
        }
        Header[] responseHeaders = page.getFetchResponseHeaders();
        if (responseHeaders != null) {
            System.out.println("Response headers:");
            for (Header header : responseHeaders) {
                System.out.println("\t" + header.getName() + ": " + header.getValue());
            }
        }
        System.out.println("=============");
    }
}
Controller.java
package edu.crawler;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;
import java.util.List;
import java.util.regex.Pattern;
import org.apache.http.Header;
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
public class Controller {
    public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "../data/";
        int numberOfCrawlers = 7;
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed urls. These are the first
         * URLs that are fetched and then the crawler starts following links
         * which are found in these pages
         */
        controller.addSeed("http://www.ics.uci.edu/~welling/");
        controller.addSeed("http://www.ics.uci.edu/~lopes/");
        controller.addSeed("http://www.ics.uci.edu/");

        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(MyCrawler, numberOfCrawlers);
    }
}
The Structure is as follows:
java/MyCrawler.java
java/Controller.java
jars/... --> all the jars crawler4j
I try to compile this on a WINDOWS machine using:
javac -cp "C:\xampp\htdocs\crawlcrowd\www\java\jars\*;C:\xampp\htdocs\crawlcrowd\www\java\*" MyCrawler.java
This works perfectly, and I end up with:
java/MyCrawler.class
However, when I type:
javac -cp "C:\xampp\htdocs\crawlcrowd\www\java\jars\*;C:\xampp\htdocs\crawlcrowd\www\java\*" Controller.java
it bombs out with:
Controller.java:50: error: cannot find symbol
controller.start(MyCrawler, numberOfCrawlers);
^
symbol: variable MyCrawler
location: class Controller
1 error
So, I think somehow I am not doing something that I need to be doing. Something that will make this new executable class be "aware" of the MyCrawler.class. I have tried fiddling with the classpath in the commandline javac part. I've also tried setting it in my environment variables.... no luck.
Any idea how I can get this to work?
UPDATE
I got most of this code from the Google Code page itself. But I just can't figure out what must go there. Even if I try this:
MyCrawler mc = new MyCrawler();
No luck. Somehow Controller.class does not know about MyCrawler.class.
UPDATE 2
I don't think it matters, since the problem is clearly that it can't find the class, but either way, here is the signature of CrawlController's start method. Taken from here.
/**
 * Start the crawling session and wait for it to finish.
 *
 * @param _c
 *            the class that implements the logic for crawler threads
 * @param numberOfCrawlers
 *            the number of concurrent threads that will be contributing in
 *            this crawling session.
 */
public <T extends WebCrawler> void start(final Class<T> _c, final int numberOfCrawlers) {
    this.start(_c, numberOfCrawlers, true);
}
I am in fact passing a crawler, since I'm passing in "MyCrawler". The problem is that the application doesn't know what MyCrawler is.
A couple of things come to mind:
Is your MyCrawler extending edu.uci.ics.crawler4j.crawler.WebCrawler?
public class MyCrawler extends WebCrawler
Are you passing in MyCrawler.class (i.e., as a class) into controller.start?
controller.start(MyCrawler.class, numberOfCrawlers);
Both of these need to be satisfied in order for the controller to compile and run. Also, Crawler4j has some great examples here:
https://code.google.com/p/crawler4j/source/browse/src/test/java/edu/uci/ics/crawler4j/examples/basic/BasicCrawler.java
https://code.google.com/p/crawler4j/source/browse/src/test/java/edu/uci/ics/crawler4j/examples/basic/BasicCrawlController.java
These 2 classes will compile and run right away (i.e., BasicCrawlController), so it's a good starting place if you are running into any issues.
The parameters for start() should be a class and the number of crawlers. It's throwing an error because you are passing in a crawler object and not the crawler class. Use the start method as shown below; it should work:
controller.start(MyCrawler.class, numberOfCrawlers)
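To see why the class literal matters: crawler4j's start() takes a Class&lt;T&gt; so the controller can instantiate the crawler threads itself. A toy version of that signature (the names here are illustrative stand-ins, not the crawler4j API):

```java
public class ClassLiteralDemo {
    static class WebCrawler {}
    static class MyCrawler extends WebCrawler {}

    // Toy version of a Class<T>-taking start(): the caller passes MyCrawler.class,
    // and the framework creates instances reflectively.
    static <T extends WebCrawler> T start(Class<T> crawlerClass) throws Exception {
        return crawlerClass.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        WebCrawler crawler = start(MyCrawler.class);
        System.out.println(crawler.getClass().getSimpleName()); // MyCrawler
    }
}
```

Writing `start(MyCrawler, ...)` fails to compile because `MyCrawler` alone names a type, not a value; `MyCrawler.class` is the value (a Class object) the method expects.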
Here you are passing the class name MyCrawler as a parameter:
controller.start(MyCrawler, numberOfCrawlers);
A bare class name cannot be used as a parameter; pass the class literal (MyCrawler.class) instead.
I am also working a little bit on crawling!

Java DNS cache viewer

Is there a way to view/dump the DNS cache used by the java.net API?
Here is a script to print the positive and negative DNS address cache.
import java.lang.reflect.Field;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;
public class DNSCache {
    public static void main(String[] args) throws Exception {
        InetAddress.getByName("stackoverflow.com");
        InetAddress.getByName("www.google.com");
        InetAddress.getByName("www.yahoo.com");
        InetAddress.getByName("www.example.com");
        try {
            InetAddress.getByName("nowhere.example.com");
        } catch (UnknownHostException e) {
        }

        String addressCache = "addressCache";
        System.out.println(addressCache);
        printDNSCache(addressCache);

        String negativeCache = "negativeCache";
        System.out.println(negativeCache);
        printDNSCache(negativeCache);
    }

    private static void printDNSCache(String cacheName) throws Exception {
        Class<InetAddress> klass = InetAddress.class;
        Field acf = klass.getDeclaredField(cacheName);
        acf.setAccessible(true);
        Object addressCache = acf.get(null);
        Class cacheKlass = addressCache.getClass();
        Field cf = cacheKlass.getDeclaredField("cache");
        cf.setAccessible(true);
        Map<String, Object> cache = (Map<String, Object>) cf.get(addressCache);
        for (Map.Entry<String, Object> hi : cache.entrySet()) {
            Object cacheEntry = hi.getValue();
            Class cacheEntryKlass = cacheEntry.getClass();
            Field expf = cacheEntryKlass.getDeclaredField("expiration");
            expf.setAccessible(true);
            long expires = (Long) expf.get(cacheEntry);
            Field af = cacheEntryKlass.getDeclaredField("address");
            af.setAccessible(true);
            InetAddress[] addresses = (InetAddress[]) af.get(cacheEntry);
            List<String> ads = new ArrayList<String>(addresses.length);
            for (InetAddress address : addresses) {
                ads.add(address.getHostAddress());
            }
            System.out.println(hi.getKey() + " " + new Date(expires) + " " + ads);
        }
    }
}
The java.net.InetAddress class caches successful and unsuccessful host name resolutions.
From its javadoc:
The InetAddress class has a cache to store successful as well as unsuccessful host name resolutions. By default, when a security manager is installed, in order to protect against DNS spoofing attacks, the result of positive host name resolutions are cached forever. When a security manager is not installed, the default behavior is to cache entries for a finite (implementation dependent) period of time. The result of unsuccessful host name resolution is cached for a very short period of time (10 seconds) to improve performance.
If the default behavior is not desired, then a Java security property can be set to a different Time-to-live (TTL) value for positive caching. Likewise, a system admin can configure a different negative caching TTL value when needed.
Two Java security properties control the TTL values used for positive and negative host name resolution caching:
networkaddress.cache.ttl
Indicates the caching policy for successful name lookups from the name service. The value is specified as an integer to indicate the number of seconds to cache the successful lookup. The default setting is to cache for an implementation specific period of time. A value of -1 indicates "cache forever".
networkaddress.cache.negative.ttl (default: 10)
Indicates the caching policy for un-successful name lookups from the name service. The value is specified as an integer to indicate the number of seconds to cache the failure for un-successful lookups. A value of 0 indicates "never cache". A value of -1 indicates "cache forever".
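If you want to change these TTLs from code rather than from a security properties file, `java.security.Security.setProperty` works, provided it runs before the first lookup populates the cache (the property names come from the javadoc above; the values here are arbitrary examples):

```java
import java.security.Security;

public class DnsTtlConfig {
    public static void main(String[] args) {
        // Must run before InetAddress performs its first resolution,
        // otherwise already-cached entries keep their old expiry.
        Security.setProperty("networkaddress.cache.ttl", "60");
        Security.setProperty("networkaddress.cache.negative.ttl", "5");
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));          // 60
        System.out.println(Security.getProperty("networkaddress.cache.negative.ttl")); // 5
    }
}
```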
If what you have in mind is dumping the caches (of type java.net.InetAddress$Cache) used by java.net.InetAddress, they are internal implementation details and thus private:
/*
* Cached addresses - our own litle nis, not!
*/
private static Cache addressCache = new Cache(Cache.Type.Positive);
private static Cache negativeCache = new Cache(Cache.Type.Negative);
So I doubt you'll find anything doing this out of the box and guess that you'll have to play with reflection to achieve your goal.
The above answer no longer works in Java 8.
Here is a slight adaptation:
import java.lang.reflect.Field;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;
public class DNSCache {
    public static void main(String[] args) throws Exception {
        InetAddress.getByName("stackoverflow.com");
        InetAddress.getByName("www.google.com");
        InetAddress.getByName("www.yahoo.com");
        InetAddress.getByName("www.example.com");
        try {
            InetAddress.getByName("nowhere.example.com");
        } catch (UnknownHostException e) {
        }

        String addressCache = "addressCache";
        System.out.println(addressCache);
        printDNSCache(addressCache);

        String negativeCache = "negativeCache";
        System.out.println(negativeCache);
        printDNSCache(negativeCache);
    }

    private static void printDNSCache(String cacheName) throws Exception {
        Class<InetAddress> klass = InetAddress.class;
        Field acf = klass.getDeclaredField(cacheName);
        acf.setAccessible(true);
        Object addressCache = acf.get(null);
        Class cacheKlass = addressCache.getClass();
        Field cf = cacheKlass.getDeclaredField("cache");
        cf.setAccessible(true);
        Map<String, Object> cache = (Map<String, Object>) cf.get(addressCache);
        for (Map.Entry<String, Object> hi : cache.entrySet()) {
            Object cacheEntry = hi.getValue();
            Class cacheEntryKlass = cacheEntry.getClass();
            Field expf = cacheEntryKlass.getDeclaredField("expiration");
            expf.setAccessible(true);
            long expires = (Long) expf.get(cacheEntry);
            Field af = cacheEntryKlass.getDeclaredField("addresses");
            af.setAccessible(true);
            InetAddress[] addresses = (InetAddress[]) af.get(cacheEntry);
            List<String> ads = new ArrayList<String>(addresses.length);
            for (InetAddress address : addresses) {
                ads.add(address.getHostAddress());
            }
            System.out.println(hi.getKey() + " expires in "
                    + Instant.now().until(Instant.ofEpochMilli(expires), ChronoUnit.SECONDS) + " seconds " + ads);
        }
    }
}
The above answer does not work with Java 11. In Java 11, both positive and negative cache entries can be retrieved through the single 'cache' field.
Here is a new adaptation:
import java.lang.reflect.Field;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DnsCacheFetcher {

    static long startTimeinNano = System.nanoTime();

    public static void main(String[] args) throws Exception {
        System.out.println("SecurityManager: " + System.getSecurityManager());
        InetAddress.getByName("stackoverflow.com");
        InetAddress.getByName("www.google.com");
        InetAddress.getByName("www.yahoo.com");
        InetAddress.getByName("www.ankit.com");
        try {
            InetAddress.getByName("nowhere.example.com");
        } catch (UnknownHostException e) {
            System.out.println("Unknown host: " + e);
        }

        // In Java 11 positive and negative entries live in the same "cache" field.
        String addressCache = "cache";
        System.out.println(">>>>" + addressCache);
        printDNSCache(addressCache);
    }

    private static void printDNSCache(String cacheName) throws Exception {
        Class<InetAddress> klass = InetAddress.class;
        Field acf = klass.getDeclaredField(cacheName);
        acf.setAccessible(true);
        Object addressCache = acf.get(null);
        Map<String, Object> cache = (Map<String, Object>) acf.get(addressCache);
        for (Map.Entry<String, Object> hi : cache.entrySet()) {
            Object cacheEntry = hi.getValue();
            Class cacheEntryKlass = cacheEntry.getClass();
            Field expf = cacheEntryKlass.getDeclaredField("expiryTime");
            expf.setAccessible(true);
            long expires = (Long) expf.get(cacheEntry);
            Field af = cacheEntryKlass.getDeclaredField("inetAddresses");
            af.setAccessible(true);
            InetAddress[] addresses = (InetAddress[]) af.get(cacheEntry);
            List<String> ads = null;
            if (addresses != null) {
                ads = new ArrayList<String>(addresses.length);
                for (InetAddress address : addresses) {
                    ads.add(address.getHostAddress());
                }
            }
            /*
             * System.nanoTime() + 1000_000_000L * cachePolicy : this is how Java 11
             * sets expiryTime
             */
            System.out.println(hi.getKey() + " expires in approx " + (expires - startTimeinNano) / 1000_000_000L
                    + " seconds. inetAddresses: " + ads);
        }
    }
}
https://github.com/alibaba/java-dns-cache-manipulator
A simple zero-dependency, thread-safe Java™ lib for setting/viewing DNS programmatically without touching the hosts file, making unit/integration tests portable; also a tool for setting/viewing the DNS of a running JVM process.
This lib/tool reads and sets the Java DNS cache by reflection, with these concerns:
compatibility with different Java versions (supports Java 6/8/11/17), since the DNS cache implementation in java.net.InetAddress differs between Java versions
thread-safety
