public void Test() {
    String location = xyz; // location of the class file for the stub
    ActivationDesc desc = new ActivationDesc("TestObjectImpl", location, null);
    ActivationID id = ActivationGroup.getSystem().registerObject(desc);
    TestObject to = (TestObject) id.activate(true);
}
On running the above code, I get a ClassNotFoundException for TestObjectImpl_stub. I looked around the web and found two possible workarounds:
Specify the path of the class files in the CLASSPATH variable.
While executing the test, pass java.rmi.server.codebase=location along with the java command, location being the location of the class files (see the example command below).
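For the second option, the codebase is passed as a JVM system property; a minimal sketch of the command (the path and main class are placeholders, and note the trailing slash when pointing at a directory of class files):
java -Djava.rmi.server.codebase=file:/path/to/classes/ MyServer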
Which of the above methods is more appropriate? Is there a better solution for this?
I have a Java program that can change the wallpaper, taking an image as input, using the WinAPI.
Everything works fine when I run it inside the Eclipse IDE, but when I run the JAR I get the error:
Caused by: java.lang.IllegalArgumentException: URI is not hierarchical
import java.io.File;
import java.io.IOException;
import java.net.URISyntaxException;
import java.net.URL;

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.win32.W32APIOptions;

public class Main {

    // INIT USER32 for WINAPI
    public static interface User32 extends Library {
        User32 INSTANCE = (User32) Native.loadLibrary("user32", User32.class, W32APIOptions.DEFAULT_OPTIONS);

        boolean SystemParametersInfo(int one, int two, String s, int three);
    }

    public static void main(String[] args) throws IOException, URISyntaxException {
        // Change wallpaper
        System.out.println("Change wallpaper");
        URL url = Main.class.getResource("/resources/img.jpg");
        File f = new File(url.toURI());
        String path = f.getPath();
        User32.INSTANCE.SystemParametersInfo(0x0014, 0, path, 1);
    }
}
The image is shipped within the JAR, so maybe the error is related to that: the program is not able to correctly resolve the URL inside the JAR.
Is there a way to solve this?
A JAR file is just a compressed archive. When a resource is bundled inside a JAR, it is no longer an individual file on the file system, which means you cannot access it as one.
Try using getResourceAsStream(...) instead.
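Since SystemParametersInfo needs a real file path, one option is to copy the bundled image to a temporary file and hand that path to the API. A minimal sketch, assuming the image is packaged at /resources/img.jpg as in the question (uses java.io.InputStream and java.nio.file.Files/Path/StandardCopyOption):
// Copy the image out of the JAR into a temporary file,
// because a resource inside a JAR has no usable file-system path.
try (InputStream in = Main.class.getResourceAsStream("/resources/img.jpg")) {
    Path tmp = Files.createTempFile("wallpaper", ".jpg");
    Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
    User32.INSTANCE.SystemParametersInfo(0x0014, 0, tmp.toString(), 1);
}
This works both when running from the IDE and from the JAR, since it never relies on the resource being a file.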
In a multi-module project I want to be sure that Spring's @Sql annotation uses the correct resources. Is there a way to log the full paths of those files to the console somehow?
Spring does log the script file name before execution, but in tests for different modules those file names are sometimes the same.
SqlScriptsTestExecutionListener is responsible for processing @Sql. As a first step you can enable its debug logging by adding the property logging.level.org.springframework.test.context.jdbc=debug, but the debug message is not complete. If that is not enough, you should create your own TestExecutionListener and declare it on the test class with @TestExecutionListeners(listeners = SqlScriptsCustomTestExecutionListener.class).
For example:
public class SqlScriptsCustomTestExecutionListener extends AbstractTestExecutionListener {

    @Override
    public void beforeTestMethod(TestContext testContext) {
        List<Resource> scriptResources = new ArrayList<>();
        Set<Sql> sqlAnnotations = AnnotatedElementUtils.getMergedRepeatableAnnotations(testContext.getTestMethod(), Sql.class);
        for (Sql sqlAnnotation : sqlAnnotations) {
            String[] scripts = sqlAnnotation.scripts();
            scripts = TestContextResourceUtils.convertToClasspathResourcePaths(testContext.getTestClass(), scripts);
            scriptResources.addAll(TestContextResourceUtils.convertToResourceList(testContext.getApplicationContext(), scripts));
        }
        if (!scriptResources.isEmpty()) {
            String debugString = scriptResources.stream().map(r -> {
                try {
                    return r.getFile().getAbsolutePath();
                } catch (IOException e) {
                    System.out.println("Unable to find file resource");
                }
                return null;
            }).collect(Collectors.joining(","));
            System.out.println(String.format("Execute sql script: [%s]", debugString));
        }
    }
}
This is just a quick example, but it works. Most of the source code I copied from SqlScriptsTestExecutionListener, just for explanation. It only covers the case of @Sql annotations at the method level, not at the class level.
I hope this helps.
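To wire the listener into a test class, it can be declared alongside @Sql. A sketch (the test class name and script path are made up); mergeMode keeps Spring's default listeners active, including the one that actually executes the scripts:
@TestExecutionListeners(
        listeners = SqlScriptsCustomTestExecutionListener.class,
        mergeMode = TestExecutionListeners.MergeMode.MERGE_WITH_DEFAULTS)
@Sql(scripts = "/data/insert-users.sql") // hypothetical script path
public class UserRepositoryTest {
    // test methods as usual
}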
I'm writing a UDF for Pig in Java. It works fine, but Pig doesn't give me a way to separate environments. What my Pig script does is look up the geo location for an IP address.
Here's the geo-location part of my code.
private static final String GEO_DB = "GeoLite2-City.mmdb";
private static final String GEO_FILE = "/geo/" + GEO_DB;

public Map<String, Object> geoData(String ipStr) {
    Map<String, Object> geoMap = new HashMap<String, Object>();
    DatabaseReader reader = new DatabaseReader.Builder(new File(GEO_DB)).build();
    // other stuff
}
GeoLite2-City.mmdb exists in HDFS, which is why I can refer to it by the absolute path /geo/GeoLite2-City.mmdb.
However, I can't do that from my JUnit test unless I create /geo/GeoLite2-City.mmdb on my local machine and on Jenkins, which is not ideal. I'm trying to figure out a way to make my test pass while still using new File(GEO_DB) and not
getClass().getResourceAsStream("./geo/GeoLite2-City.mmdb")
because that doesn't work in Hadoop.
And if I run the JUnit test as is, it fails because I don't have /geo/GeoLite2-City.mmdb on my local machine.
Is there any way I can overcome this? I just want my tests to pass without changing the code to use getClass().getResourceAsStream, and I can't if/else around that because Pig doesn't give me a way to pass in a parameter (or maybe I'm missing something).
And this is my JUnit test
@Test
@Ignore
public void shouldGetGeoData() throws Exception {
    String ipTest = "128.101.101.101";
    Map<String, Object> geoJson = new LogLine2Json().geoData(ipTest);
    assertThat(geoJson.get("lLa").toString(), is(equalTo("44.9759")));
    assertThat(geoJson.get("lLo").toString(), is(equalTo("-93.2166")));
}
It works if I read the database file from the resources folder. That's why I have @Ignore.
Besides, your whole code looks pretty un-testable.
Every time you directly call new in your production code, you prevent dependency injection, and thereby you make it much harder to test your code.
The point is to not call new File() within your production code.
Instead, you could use a factory that gives you a "ready to use" DatabaseReader object. Then you can test that the factory does the right thing, and you can mock the factory when testing this code (to have it return a mocked database reader).
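A minimal sketch of that factory idea (DatabaseReaderFactory and FileDatabaseReaderFactory are made-up names, not part of the question's code):
// Hypothetical factory that hides how the DatabaseReader is built.
public interface DatabaseReaderFactory {
    DatabaseReader create() throws IOException;
}

// Production implementation: builds the reader from a configurable file location.
public class FileDatabaseReaderFactory implements DatabaseReaderFactory {

    private final File dbFile;

    public FileDatabaseReaderFactory(File dbFile) {
        this.dbFile = dbFile;
    }

    @Override
    public DatabaseReader create() throws IOException {
        return new DatabaseReader.Builder(dbFile).build();
    }
}
The UDF then asks the factory for a reader instead of calling new itself, and a test can pass either a mock factory or one pointing at a copy of the database in the test resources.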
So, that one File instance is just the tip of your "testing problems" here.
Honestly: don't write production code first. Do TDD: write your test cases first, and you will quickly learn that production code like the one you are presenting here is really hard to test. When you apply TDD, you start from the "test perspective", and you end up creating production code that is actually testable.
You have to make the file location configurable, e.g. by injecting it via the constructor. You could create a non-default constructor for testing only.
public class LogLine2Json {

    private static final String DEFAULT_GEO_DB = "GeoLite2-City.mmdb";
    private static final String DEFAULT_GEO_FILE = "/geo/" + DEFAULT_GEO_DB;

    private final String geoFile;

    public LogLine2Json() {
        this(DEFAULT_GEO_FILE);
    }

    LogLine2Json(String geoFile) {
        this.geoFile = geoFile;
    }

    public Map<String, Object> geoData(String ipStr) {
        Map<String, Object> geoMap = new HashMap<String, Object>();
        File file = new File(geoFile);
        DatabaseReader reader = new DatabaseReader.Builder(file).build();
        // other stuff
    }
}
Now you can create a file from the resource and use this file in your test.
public class LogLine2JsonTest {

    @Rule
    public final TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void shouldGetGeoData() throws Exception {
        File dbFile = copyResourceToFile("/geo/GeoLite2-City.mmdb");
        String ipTest = "128.101.101.101";
        LogLine2Json logLine2Json = new LogLine2Json(dbFile.getAbsolutePath());
        Map<String, Object> geoJson = logLine2Json.geoData(ipTest);
        assertThat(geoJson.get("lLa").toString(), is(equalTo("44.9759")));
        assertThat(geoJson.get("lLo").toString(), is(equalTo("-93.2166")));
    }

    private File copyResourceToFile(String name) throws IOException {
        InputStream resource = getClass().getResourceAsStream(name);
        File file = folder.newFile();
        Files.copy(resource, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
        return file;
    }
}
TemporaryFolder is a JUnit rule that deletes every file created during the test afterwards.
You may modify the asserts by using the hasToString matcher. This will give you more detailed information in case of a failing test. (And you have to read/write less code.)
assertThat(geoJson.get("lLa"), hasToString("44.9759"));
assertThat(geoJson.get("lLo"), hasToString("-93.2166"));
You don't. Your question embodies a contradiction in terms. Resources are not files and do not live in the file system. You can either distribute the file separately from the JAR and use it as a File or include it in the JAR and use it as a resource. Not both. You have to make up your mind.
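To make the distinction concrete (using the paths from this question):
// Distributed separately from the JAR: a real file on disk, usable as a File.
File dbFile = new File("/geo/GeoLite2-City.mmdb");

// Packaged inside the JAR: a resource, readable only as a stream.
InputStream dbStream = getClass().getResourceAsStream("/geo/GeoLite2-City.mmdb");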
I am using Liquibase (3.1.1) in a Spring environment (3.2.x) and load the changesets via the includeAll tag in a master file. There I use "classpath*:/package/to/changesets" as the path.
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
<includeAll path="classpath*:/package/to/changesets"/>...
I use a naming strategy like "nnn_changesetname.xml" to keep the ordering. But when I look into the changeset table, the ordering implied by the filenames is not kept. Does this only work if the changeset files are contained in a directory and not on the classpath?
Update
I found out that the solution suggested below is not enough. I think the problem lies in how Liquibase resolves the includeAll attribute. In my case it first resolves all "folders" and then looks into each folder for changeset XMLs. This breaks the ordering of the XML files across all classpath*:/changes locations, because there are now several "changes" folders in different places. What I would expect in such a case is a merge of the contents of these "virtual" classpath folders and loading of all resources in one enumeration. Alternatively, the includeAll tag could accept a resource pattern like resources="classpath*:/changes/*.xml" to directly select the needed files (I tried that with the path attribute, but it did not work because it checks for a folder).
Update
I made a hack to check whether the ordering in the returned enumeration is preserved with the answer from below. To achieve this I check the given package name, and if it matches my pattern I append an additional "*.xml" to it. With this extension I get all the changesets as needed.
@Override
public Enumeration<URL> getResources(String packageName) throws IOException {
    if (packageName.equals("classpath*:/plugin/liquibase/changes/")) {
        packageName = packageName + "*.xml";
    }
    List<URL> resources = Collections.list(super.getResources(packageName));
    Collections.sort(resources, new Comparator<URL>() {
        @Override
        public int compare(URL url1, URL url2) {
            String path1 = FilenameUtils.getName(url1.getPath());
            String path2 = FilenameUtils.getName(url2.getPath());
            return String.CASE_INSENSITIVE_ORDER.compare(path1, path2);
        }
    });
    logger.info("Found resources: {}", resources);
    return Collections.enumeration(resources);
}};
In the log I can now see that the resources are in the correct order. But when I look into the DATABASECHANGELOG table, it does not reflect the order I had in the enumeration. So it seems that these values get reordered somewhere else.
Update
I analyzed the code further and found out that the class liquibase.parser.core.xml.XMLChangeLogSAXHandler reorders the returned enumeration, so my changes have no effect. I do not think I can hack into this class as well.
You are right, Liquibase is relying on the underlying "list files" logic, which orders files alphabetically when listing from the file system but apparently not when listing from the classpath.
I created https://liquibase.jira.com/browse/CORE-1843 to track the fix.
For now, if you configure Spring with a subclass of liquibase.integration.spring.SpringLiquibase that overrides getResources(String packageName) with a method that sorts the returned Enumeration, that should resolve the problem for you.
So after some thinking and one night of sleep, I came up with the following hack to guarantee the order of changelog files loaded via the classpath pattern classpath*:/my/path/to/changelog/*.xml. The idea is to create the main changelog file on the fly via DOM manipulation when Liquibase requests it.
It only works for the main changelog file, with the following prerequisites:
The pattern can only be used for the main changelog file
I use an empty master changelog file as template
All other changelog files have to use the normal allowed loading mechanism
Works only in a Spring environment
First I had to extend/override liquibase.integration.spring.SpringLiquibase with my own implementation.
public class MySpringLiquibase extends SpringLiquibase {
private static final Logger logger = LoggerFactory.getLogger(MySpringLiquibase.class);
private ApplicationContext context;
private String changeLogLocationPattern;
private List<String> changeLogLocations;
@Autowired
public void setContext(ApplicationContext context) {
this.context = context;
}
/**
* Location pattern to search for changelog files.
*
* @param changeLogLocationPattern
*/
public void setChangeLogLocationPattern(String changeLogLocationPattern) {
this.changeLogLocationPattern = changeLogLocationPattern;
}
@Override
public void afterPropertiesSet() throws LiquibaseException {
try {
changeLogLocations = new ArrayList<String>();
// retrieve all changelog resources for the pattern
List<Resource> changeLogResources = Arrays.asList(context.getResources(changeLogLocationPattern));
for (Resource changeLogResource : changeLogResources) {
// get only the classpath path of the resource
String changeLogLocation = changeLogResource.getURL().getPath();
changeLogLocation = "classpath:" + StringUtils.substringAfterLast(changeLogLocation, "!");
changeLogLocations.add(changeLogLocation);
}
// sort all found resources by string
Collections.sort(changeLogLocations, String.CASE_INSENSITIVE_ORDER);
} catch (IOException e) {
throw new LiquibaseException("Could not resolve changeLogLocationPattern", e);
}
super.afterPropertiesSet();
}
@Override
protected SpringResourceOpener createResourceOpener() {
final String mainChangeLog = getChangeLog();
return new SpringResourceOpener(getChangeLog()) {
@Override
public InputStream getResourceAsStream(String file)
throws IOException {
// check if main changelog file
if(mainChangeLog.equals(file)) {
// load master template and convert to dom object
Resource masterResource = getResourceLoader().getResource(file);
Document masterDocument = DomUtils.parse(masterResource, true);
// add all changelog locations as include elements
for (String changeLogLocation : changeLogLocations) {
Element includeElement = masterDocument.createElement("include");
includeElement.setAttribute("file", changeLogLocation);
masterDocument.getDocumentElement().appendChild(includeElement);
}
if(logger.isDebugEnabled()) {
logger.debug("Master changeset: {}", DomUtils.toString(masterDocument));
}
// convert dom back to string and give it back as input resource
return new ByteArrayInputStream(DomUtils.toBytes(masterDocument));
} else {
return super.getResourceAsStream(file);
}
}
};
}
}
This class now needs to be used in the Spring XML configuration.
<bean id="liquibase" class="liquibase.integration.spring.MySpringLiquibase"
p:changeLog="classpath:/plugin/liquibase/master.xml"
p:dataSource-ref="dataSource"
p:contexts="${liquibase.contexts:prod}"
p:ignoreClasspathPrefix="true"
p:changeLogLocationPattern="classpath*:/plugin/liquibase/changes/*.xml"/>
With these changes I have achieved that my main changelog files are ordered by their names.
Hope that helps others too.
I am writing an application that will load Java scripts. I currently have a GUI which utilizes a JFileChooser to allow the user to select a script from their machine. The script file can be anywhere. It is not on the classpath. Having only a File object to represent that script file, how can I obtain a Class representation of it?
I know that to load a class you need its binary name, i.e. in.this.format. However, the problem is that I don't know how the script writer may have packaged it. For example, while developing it he/she may have put the script file in the package foo.bar. After I download this script and place it in my documents (i.e., not in foo/bar), I can't load it without knowing that it was packaged in foo.bar. If the class name is Test and I try to create a URLClassLoader pointing to the script file by doing new URLClassLoader(new URL[] { scriptFile.toURI().toURL() }) and then call classLoader.loadClass("Test"), I get an exception saying that the class had the wrong name, and the correct name is foo.bar.Test. But how am I supposed to know that ahead of time?
This is what I have right now:
public class ScriptClassLoader extends URLClassLoader {

    private final File script;

    public ScriptClassLoader(File script) throws MalformedURLException {
        super(new URL[] { script.toURI().toURL() });
        this.script = script;
    }

    public Class<?> load() throws ClassNotFoundException {
        String fileName = script.getName();
        String className = fileName.substring(0, fileName.indexOf(".class"));
        return loadClass(className);
    }
}
How do people load scripts at runtime that are not part of the program's classpath, when the binary name of the class is not known?
If you just need to load a class from a given .class file, no matter what that class is named, you can load the data yourself and then call ClassLoader's defineClass() method:
RandomAccessFile raf = new RandomAccessFile(script, "r");
try {
    byte[] classData = new byte[(int) raf.length()];
    raf.readFully(classData);
    // Passing null as the name lets the JVM derive the binary name
    // (e.g. foo.bar.Test) from the class data itself.
    return super.defineClass(null, classData, 0, classData.length);
} finally {
    raf.close();
}
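A usage sketch, assuming the snippet above becomes the body of load() in the ScriptClassLoader from the question (and that load() then declares throws IOException instead of ClassNotFoundException); fileChooser stands for the JFileChooser already used by the application:
File chosen = fileChooser.getSelectedFile();            // the .class file the user picked
ScriptClassLoader loader = new ScriptClassLoader(chosen);
Class<?> scriptClass = loader.load();                   // name is derived from the class data
System.out.println("Loaded " + scriptClass.getName());  // e.g. foo.bar.Test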