Functional tests (Jellytools) don't start on NetBeans platform - java

I'm trying to add some functional tests to an existing NetBeans application.
Info about the application: packaged with Maven, built on NetBeans Platform 7.3.1.
I've added the dependencies as described in this article, but I got this exception:
Running qa.FuncTest
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.067 sec <<< FAILURE! - in qa.FuncTest
org.netbeans.junit.NbModuleSuite$S@67ad77a7(org.netbeans.junit.NbModuleSuite$S) Time elapsed: 0.066 sec <<< ERROR!
java.lang.ClassNotFoundException: org.netbeans.Main
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at org.netbeans.junit.NbModuleSuite$S.runInRuntimeContainer(NbModuleSuite.java:819)
at org.netbeans.junit.NbModuleSuite$S.access$100(NbModuleSuite.java:667)
Does anybody know why this happened? And how to fix it?
Thanks in advance.
UPD: dependency section from application/pom.xml:
<dependencies>
<dependency>
<groupId>org.netbeans.cluster</groupId>
<artifactId>platform</artifactId>
<version>${software.netbeans.version}</version>
<type>pom</type>
</dependency>
<dependency>
<groupId>org.netbeans.api</groupId>
<artifactId>org-jdesktop-beansbinding</artifactId>
<version>${software.netbeans.version}</version>
</dependency>
<dependency>
<groupId>org.netbeans.api</groupId>
<artifactId>org-netbeans-modules-nbjunit</artifactId>
<version>${software.netbeans.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.netbeans.api</groupId>
<artifactId>org-netbeans-modules-jellytools-platform</artifactId>
<version>${software.netbeans.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
UPD1: the test class:
package qa;
import junit.framework.Test;
import org.netbeans.jellytools.JellyTestCase;
import org.netbeans.jellytools.OptionsOperator;
import org.netbeans.junit.NbModuleSuite;
import org.openide.windows.TopComponent;
public class FuncTest extends JellyTestCase {
public static Test suite() {
return NbModuleSuite.allModules(FuncTest.class);
}
public FuncTest(String n) {
super(n);
}
public void testWhatever() throws Exception {
TopComponent tc = new TopComponent();
tc.setName("label");
tc.open();
OptionsOperator.invoke().selectMiscellaneous();
Thread.sleep(5000);
System.err.println("OK.");
}
}

I would like to share the results of my investigation. I noticed that when the application was started normally, I saw this in the output window:
Installation =.../application/target/application/extra
.../application/target/application/java
.../application/target/application/kws
.../application/target/application/platform
but when the application was started via NbJUnit/Jellytools, I saw only:
Installation =.../application/target/application/platform
so I decided to find out how these values are set and investigated the source code. Let's consider a few interesting methods in NbModuleSuite.java:
private static String[] tokenizePath(String path) {
List<String> l = new ArrayList<String>();
StringTokenizer tok = new StringTokenizer(path, ":;", true); // NOI18N
.....
}
static File findPlatform() {
String clusterPath = System.getProperty("cluster.path.final"); // NOI18N
if (clusterPath != null) {
for (String piece : tokenizePath(clusterPath)) {
File d = new File(piece);
if (d.getName().matches("platform\\d*")) {
return d;
}
}
}
String allClusters = System.getProperty("all.clusters"); // #194794
if (allClusters != null) {
File d = new File(allClusters, "platform"); // do not bother with old numbered variants
if (d.isDirectory()) {
return d;
}
}
....
}
static void findClusters(Collection<File> clusters, List<String> regExps) throws IOException {
File plat = findPlatform().getCanonicalFile();
String selectiveClusters = System.getProperty("cluster.path.final"); // NOI18N
Set<File> path;
if (selectiveClusters != null) {
path = new TreeSet<File>();
for (String p : tokenizePath(selectiveClusters)) {
File f = new File(p);
path.add(f.getCanonicalFile());
}
} else {
File parent;
String allClusters = System.getProperty("all.clusters"); // #194794
if (allClusters != null) {
parent = new File(allClusters);
} else {
parent = plat.getParentFile();
}
path = new TreeSet<File>(Arrays.asList(parent.listFiles()));
}
....
}
As you can see, we can set the path values in cluster.path.final or all.clusters and use ; or : as delimiters. I spent some time playing with these properties and realised that this configuration in pom.xml did not take effect:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>${software.maven-surefire-plugin}</version>
<configuration>
<skipTests>false</skipTests>
<systemPropertyVariables>
<branding.token>${brandingToken}</branding.token>
<!--problem part start-->
<property>
<name>cluster.path.final</name>
<value>${project.build.directory}/${brandingToken}/platform:${project.build.directory}/${brandingToken}/java:...etc</value>
</property>
<!--problem part end-->
</systemPropertyVariables>
</configuration>
</plugin>
but this works well:
<properties>
<cluster.path.final>${project.build.directory}/${brandingToken}/platform:${project.build.directory}/${brandingToken}/java:...etc</cluster.path.final>
</properties>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>${software.maven-surefire-plugin}</version>
<configuration>
<skipTests>false</skipTests>
<systemPropertyVariables>
<branding.token>${brandingToken}</branding.token>
<cluster.path.final>${cluster.path.final}</cluster.path.final>
</systemPropertyVariables>
</configuration>
</plugin>
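A quick way to check that the value actually reaches the forked test JVM is to print it from a test method. This is only a hypothetical helper, not part of the original test class:
public void testClusterPathIsVisible() {
    // If this prints null, surefire did not pass the property through and
    // NbModuleSuite.findPlatform() falls back to the platform cluster only.
    System.err.println("cluster.path.final = " + System.getProperty("cluster.path.final"));
}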
I don't know why this happens, but I would recommend using the Maven <properties> section to set the systemPropertyVariables of the maven-surefire-plugin. Good luck!

Related

Java Melody does not show any sql data on xampp tomcat8

On XAMPP Tomcat on Windows 11, I am trying to monitor a Java web app with JavaMelody.
However, SQL data is not detected by JavaMelody.
Could you figure out what I am missing?
I have created a library project so that I don't have to repeat the same settings in every app.
Here is the project's code...
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.tugalsan</groupId>
<artifactId>api-profile</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>4.0.1</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>net.bull.javamelody</groupId>
<artifactId>javamelody-core</artifactId>
<version>1.90.0</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>api-url</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/java</directory>
<includes>
<include>**/*.java</include>
<include>**/*.gwt.xml</include>
</includes>
</resource>
<resource>
<directory>src/main/resources</directory>
<includes>
<include>**/*.*</include>
</includes>
</resource>
</resources>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
<version>3.2.1</version>
<executions>
<execution>
<id>attach-sources</id>
<phase>package</phase>
<goals>
<goal>jar-no-fork</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</pluginManagement>
</build>
<properties>
<maven.compiler.source>15</maven.compiler.source>
<maven.compiler.target>15</maven.compiler.target>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
</project>
TGS_ProfileServletUtils.java:
package com.tugalsan.api.profile.client;
import com.tugalsan.api.url.client.parser.*;
public class TGS_ProfileServletUtils {
final public static String SERVLET_NAME = "monitoring";//HARD-CODED IN LIB, THIS CANNOT BE CHANGED!
}
TS_ProfileMelodyUtils.java:
package com.tugalsan.api.profile.server.melody;
import java.sql.*;
import javax.sql.*;
import javax.servlet.*;
import javax.servlet.annotation.*;
import net.bull.javamelody.*;
import com.tugalsan.api.profile.client.*;
public class TS_ProfileMelodyUtils {
@WebFilter(
filterName = TGS_ProfileServletUtils.SERVLET_NAME,
dispatcherTypes = {DispatcherType.REQUEST, DispatcherType.ASYNC},
asyncSupported = true,
urlPatterns = {"/*"},
initParams = {
@WebInitParam(name = "async-supported", value = "true")
}
)
final public static class MelodyFilter extends MonitoringFilter {
}
@WebListener
final public static class MelodyListener extends SessionListener {
}
public static Connection createProxy(Connection con) {
try {
DriverManager.registerDriver(new net.bull.javamelody.JdbcDriver());
return JdbcWrapper.SINGLETON.createConnectionProxy(con);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
public static DataSource createProxy(DataSource ds) {
try {
DriverManager.registerDriver(new net.bull.javamelody.JdbcDriver());
return JdbcWrapper.SINGLETON.createDataSourceProxy(ds);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
}
A helper class:
package com.tugalsan.api.sql.conn.server;
import java.io.Serializable;
import java.util.Objects;
public class TS_SQLConnConfig implements Serializable {
public int method = TS_SQLConnMethodUtils.METHOD_MYSQL();
public String dbName;
public String dbIp = "localhost";
public int dbPort = 3306;
public String dbUser = "root";
public String dbPassword = "";
public boolean autoReconnect = true;
public boolean useSSL = false;
public boolean region_ist = true;
public boolean charsetUTF8 = true;
public boolean isPooled = true;
public TS_SQLConnConfig() {//DTO
}
public TS_SQLConnConfig(CharSequence dbName) {
this.dbName = dbName == null ? null : dbName.toString();
}
}
In another API, this is how I create a pool
(I skipped some class files, unrelated to the question)
public static PoolProperties create(TS_SQLConnConfig config) {
var pool = new PoolProperties();
pool.setUrl(TS_SQLConnURLUtils.create(config));
pool.setDriverClassName(TS_SQLConnMethodUtils.getDriver(config));
if (TGS_StringUtils.isPresent(config.dbUser) && TGS_StringUtils.isPresent(config.dbPassword)) {
pool.setUsername(config.dbUser);
pool.setPassword(config.dbPassword);
}
var maxActive = 200;
pool.setMaxActive(maxActive);
pool.setInitialSize(maxActive / 10);
pool.setJmxEnabled(true);
pool.setTestWhileIdle(true);
pool.setTestOnBorrow(true);
pool.setTestOnReturn(false);
pool.setValidationQuery("SELECT 1");
pool.setValidationInterval(30000);
pool.setTimeBetweenEvictionRunsMillis(30000);
pool.setMaxWait(10000);
pool.setMinEvictableIdleTimeMillis(30000);
pool.setMinIdle(10);
pool.setFairQueue(true);
pool.setLogAbandoned(true);
pool.setRemoveAbandonedTimeout(600);
pool.setRemoveAbandoned(true);
pool.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"
+ "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"
+ "org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
return pool;
}
WAY1:
//I created the datasource once and saved it as a global variable inside a ConcurrentLinkedQueue.
var pool_ds = new DataSource(create(config));
//then for every connection I needed, I created an extra proxy like this.
var pool_con = pool_ds.getConnection();
var proxy_con = TS_ProfileMelodyUtils.createProxy(pool_con);
//and closed both of them later on
WAY1 RESULT:
WAY2:
//I created the datasource once and saved it as a global variable inside a ConcurrentLinkedQueue.
var pool_ds = new DataSource(create(config));
//then I created a proxy datasource and saved it as a global variable too
var dsProxy = TS_ProfileMelodyUtils.createProxy(ds);
//then for every connection I needed, I did not create a proxy connection.
var pool_con = pool_ds.getConnection();
//and closed the connection later on
WAY2 RESULT: (same, nothing changed)
WAY3:
//I created the datasource once and saved it as a global variable inside a ConcurrentLinkedQueue.
var pool_ds = new DataSource(create(config));
//then I created a proxy datasource and saved it as a global variable too
var dsProxy = TS_ProfileMelodyUtils.createProxy(ds);
//then for every connection I needed, I created an extra proxy like this.
var pool_con = pool_ds.getConnection();
var proxy_con = TS_ProfileMelodyUtils.createProxy(pool_con);
//and closed both of them later on
WAY3 RESULT: (same, nothing changed)
I think I found the problem.
One should create the connection from the proxy datasource, not the pool datasource:
var pool_con = pool_ds.getConnection(); //WRONG
var pool_con = proxy_ds.getConnection(); //RIGHT
Also, when creating statements, one should use the proxy connection (proxy_con), not the main connection (pool_con)!
I used way 3, and the results showed up only once; I think JavaMelody detects that it already has a datasource and does not log twice.
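For completeness, here is a minimal sketch of the working wiring. It reuses TS_ProfileMelodyUtils and the PoolProperties helper from above; the class name MelodySqlSample and the SELECT 1 query are only for illustration:
import java.sql.SQLException;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class MelodySqlSample {
    private final javax.sql.DataSource proxyDs;

    public MelodySqlSample(PoolProperties poolProperties) {
        // Build the Tomcat JDBC pool once and wrap it with the JavaMelody proxy once;
        // the proxy is the instance to keep as the global datasource.
        var poolDs = new org.apache.tomcat.jdbc.pool.DataSource(poolProperties);
        this.proxyDs = TS_ProfileMelodyUtils.createProxy(poolDs);
    }

    // Every unit of work takes its connection from the proxy datasource, so the
    // statements created from it are proxied and show up in JavaMelody's SQL statistics.
    public int sampleQuery() throws SQLException {
        try (var con = proxyDs.getConnection();
             var st = con.createStatement();
             var rs = st.executeQuery("SELECT 1")) {
            return rs.next() ? rs.getInt(1) : -1;
        }
    }
}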
full code: at github-profile
full code: at github-sql-conn

Problem with unit tests: Junit and the Mockito framework

I am learning to write unit tests and test doubles with JUnit and the Mockito framework, but I am not getting the expected result in one specific test that uses mocks. I have an assertThat that should pass; instead, I get an error saying that Mockito cannot mock this class. The class in question, 'Console', prints values and reads them from the user's keyboard, and in unit tests it should of course be mocked to avoid the 'intervening test' antipattern, where the test asks the developer for data; in other words, I need to mock the user input. This 'Console' class is a small facade over the typical Java BufferedReader class.
Here are the classes involved:
Console:
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class Console {
private BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(System.in));
public String readString(String title) {
String input = null;
boolean ok = false;
do {
this.write(title);
try {
input = this.bufferedReader.readLine();
ok = true;
} catch (Exception ex) {
this.writeError("characte string");
}
} while (!ok);
return input;
}
public int readInt(String title) {
int input = 0;
boolean ok = false;
do {
try {
input = Integer.parseInt(this.readString(title));
ok = true;
} catch (Exception ex) {
this.writeError("integer");
}
} while (!ok);
return input;
}
public char readChar(String title) {
char charValue = ' ';
boolean ok = false;
do {
String input = this.readString(title);
if (input.length() != 1) {
this.writeError("character");
} else {
charValue = input.charAt(0);
ok = true;
}
} while (!ok);
return charValue;
}
public void writeln() {
System.out.println();
}
public void write(String string) {
System.out.print(string);
}
public void writeln(String string) {
System.out.println(string);
}
public void write(char character) {
System.out.print(character);
}
public void writeln(int integer) {
System.out.println(integer);
}
private void writeError(String format) {
System.out.println("FORMAT ERROR! " + "Enter a " + format + " formatted value.");
}
}
ConsoleTest:
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import java.io.BufferedReader;
import java.io.IOException;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.mockito.Mockito.*;
import static org.mockito.MockitoAnnotations.initMocks;
public class ConsoleTest {
@Mock
private BufferedReader bufferedReader;
@InjectMocks
private Console console;
@BeforeEach
public void before(){
initMocks(this);
//this.console = new Console();
}
@Test
public void givenConsoleWhenReadStringThenValue() throws IOException {
String string = "yes";
when(this.bufferedReader.readLine()).thenReturn(string);
assertThat(this.console.readString("title"), is(string));
}
}
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<modules>
</modules>
<artifactId>solution.java.swing.socket.sql</artifactId>
<groupId>usantatecla.tictactoe</groupId>
<version>0.0.1-SNAPSHOT</version>
<packaging>pom</packaging>
<name>${project.groupId}.${project.artifactId}</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.6.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-inline</artifactId>
<version>3.6.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-junit-jupiter</artifactId>
<version>3.6.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest</artifactId>
<version>2.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.1</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.8.3</version>
<executions>
<execution>
<id>default-prepare-agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>default-report</id>
<phase>post-integration-test</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.2</version>
<dependencies>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-surefire-provider</artifactId>
<version>1.2.0</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.22.2</version>
<dependencies>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-surefire-provider</artifactId>
<version>1.2.0</version>
</dependency>
</dependencies>
<executions>
<execution>
<goals>
<goal>integration-test</goal>
<goal>verify</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Thanks and greetings to the community!
Personally, I am not a huge fan of Mockito; I prefer to have full control of my classes by using an interface and two implementations (one for production and one for test).
So this doesn't answer your question about Mockito directly, but it allows you to fully control the behaviour of your code without needing another framework.
You may define a very simple interface:
public interface Reader {
String readLine();
}
Then, you use that interface in your class Console:
public class Console {
private final Reader reader;
public Console(Reader reader) {
this.reader = reader;
}
//Replace all your this.bufferedReader.readLine() by this.reader.readLine();
}
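For example, readString would then read from the injected Reader. This is only a sketch; the rest of the class stays as in the question:
public String readString(String title) {
    String input = null;
    boolean ok = false;
    do {
        this.write(title);
        try {
            // Delegates to the injected Reader, so a test can supply a fake implementation.
            input = this.reader.readLine();
            ok = true;
        } catch (Exception ex) {
            this.writeError("character string");
        }
    } while (!ok);
    return input;
}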
So, in your production code you can use the real implementation backed by a BufferedReader:
public class ProductionReader implements Reader {
private final BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(System.in));
@Override
public String readLine() {
try {
return this.bufferedReader.readLine();
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
}
Console console = new Console(new ProductionReader());
... While in your tests you can use a test implementation:
public class TestReader implements Reader {
@Override
public String readLine() {
return "Yes";
}
}
Console console = new Console(new TestReader());
Note that while in your specific case you may be able to mock the behaviour using Mockito, there are a lot of other cases where you will need a more complex approach, and the above allows you to have full control and full debuggability of your code without adding any dependency.

AspectJ plugin builds fine but at runtime annotations don't work

I am using the AspectJ Maven plugin to build my project and use an AspectLibrary, which is a jar in which I have my aspects defined.
Here is the Aspect that I am trying to use
@Around("execution(* *(..)) && @annotation(com.cisco.commerce.pricing.lp.commons.util.annotations.TimeMe)")
public Object timeMeAroundAspect(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {// NOSONAR
Timer timer = Timer.instance().start();
MethodSignature signature = (MethodSignature) proceedingJoinPoint.getSignature();
Method method = signature.getMethod();
TimeMe timeMeAnnotation = method.getAnnotation(TimeMe.class);
String name = timeMeAnnotation.name();
boolean log = timeMeAnnotation.log();
boolean addToMetrics = timeMeAnnotation.addToMetrics();
Object response = null;
try {
response = proceedingJoinPoint.proceed();
} finally {
try {
Long timeTaken = timer.timeTaken();
if (log) {
LOGGER.info("MethodName: {} Time taken: {}", name, timeTaken);
}
if (addToMetrics) {
ExecutionDetailsUtil.addMethodExecutionTime(name, timeTaken);
}
} catch (Exception e) {
LOGGER.warn("Exception while trying to log time", e);
}
}
return response;
}
This code is in a jar file, which I am using as the aspectLibrary in my pom
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.7</version>
<configuration>
<showWeaveInfo>true</showWeaveInfo>
<verbose>true</verbose>
<encoding>UTF-8</encoding>
<source>${java.source-target.version}</source>
<target>${java.source-target.version}</target>
<Xlint>ignore</Xlint>
<aspectLibraries>
<aspectLibrary>
<groupId>it.cvc.ciscocommerce.lps.lp-commons</groupId>
<artifactId>lp-commons</artifactId>
</aspectLibrary>
</aspectLibraries>
<complianceLevel>${java.source-target.version}</complianceLevel>
</configuration>
<executions>
<execution>
<phase>process-sources</phase>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjtools</artifactId>
<version>${aspectj.version}</version>
</dependency>
</dependencies>
</plugin>
Below is my annotation definition:
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface TimeMe {
public String name();
public boolean log() default true;
public boolean addToMetrics() default true;
}
Here is the snippet where I am trying to use this annotation (in a different code base which uses the above jar as a dependency)
@TimeMe(name = "classifyLine")
private void classifyLine(PricingObject pricingObject,
PricingLineObject pricingLineObject, LineTypes lineTypes) {
//logic
}
My build runs fine and prints the following in the MAVEN Console
[INFO] Join point 'method-execution(void com.cisco.pricing.lps.main.ListPriceService.classifyLine(com.cisco.pricing.lps.bean.PricingObject, com.cisco.pricing.lps.bean.PricingLineObject, com.cisco.pricing.lps.dto.LineTypes))' in Type 'com.cisco.pricing.lps.main.ListPriceService' (ListPriceService.java:235) advised by around advice from 'com.cisco.commerce.pricing.lp.commons.util.logging.LoggingAspectDefiner' (lp-commons-2019.03.01-SNAPSHOT.jar!LoggingAspectDefiner.class(from LoggingAspectDefiner.java))
I exploded the war file and looked at the class files generated. I have the following AjcClosure1 class generated for the java file where I used the annotation.
public class ListPriceService$AjcClosure1 extends AroundClosure {
public Object run(Object[] paramArrayOfObject) {
Object[] arrayOfObject = this.state;
ListPriceService.classifyLine_aroundBody0((ListPriceService)
arrayOfObject[0],
(PricingObject)arrayOfObject[1],
(PricingLineObject)arrayOfObject[2], (LineTypes)arrayOfObject[3],
(JoinPoint)arrayOfObject[4]);return null;
}
public ListPriceService$AjcClosure1(Object[] paramArrayOfObject)
{
super(paramArrayOfObject);
}
}
And in the java class file, where I use the annotation, I see no changes to the classifyLine method.
However, when I run my application, the annotation is not working. It doesn't execute the Aspect I have defined in the jar.
I have no clue why. Is my pattern not matching? It matches and works fine in a Spring application, but not in this non-Spring application.

Where to generate resources into during annotation processing in a Maven workflow?

I have a maven project with several modules, i.e.
<module>backend</module> <!-- provides annotations -->
<module>annotationProcessor</module> <!-- processes ann., generates files -->
<module>mainprog</module> <!-- uses annotations/files -->
backend provides an annotation class MyAnnotation for annotating classes.
mainprog contains Mainprog.java which defines a class with a @MyAnnotation annotation. At runtime this class tries to load a file via getResourceAsStream("Mainprog.properties") (which does not exist yet).
The annotationProcessor has a class MyAnnotationProcessor which maven executes and finds my annotations.
The processor should create the file Mainprog.properties from information gathered by the annotation processor.
I can not manage to put the properties file in a place where it is found when executing/testing Mainprog.
Where should I generate the file to, given that I am in a Maven workflow?
How do I tell Maven this file is used in tests or at runtime? Eventually
it has to be packaged in the jar.
Mainprog
package demo;
@MyAnnotation
public class Mainprog {
}
Use the properties file
Currently I do it in the testing class, but later this will be in the class itself.
package demo;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import org.junit.Test;
public class MainprogTest {
Class testclass = Mainprog.class;
@Test
public void testPropertiesFile() throws IOException {
String fn = testclass.getCanonicalName().replace('.', '/') + ".properties";
System.err.println("loading: '"+fn+"'");
InputStream in = getClass().getResourceAsStream(fn);
Properties prop = new Properties();
prop.load(in);
in.close();
}
}
This currently runs as such:
loading: 'demo/Mainprog.properties'
Tests in error:
testPropertiesFile(demo.MainprogTest)
with a NullPointerException, because the stream returns null, i.e. does not exist.
This is despite the file being there (but is it in the right place?):
towi#havaloc:~/git/project/mainprog$ find . -name Mainprog.properties
./src/java/demo/Mainprog.properties
./target/classes/demo/Mainprog.properties
Processor
package demo;
import com.github.javaparser.*;
import com.github.javaparser.ast.*;
import java.io.*;
import java.util.*;
import javax.annotation.processing.*;
import javax.lang.model.element.*;
@SupportedAnnotationTypes({"demo.MyAnnotation"})
public class MyAnnotationProcessor extends AbstractProcessor {
@Override
public boolean process(Set<? extends TypeElement> elements, RoundEnvironment env) {
for (TypeElement te : elements) {
for (Element e : env.getElementsAnnotatedWith(te))
{
processAnnotation(e);
}
}
return true;
}
private void processAnnotation(Element elem) {
final TypeElement classElem = (TypeElement) elem;
...
final String prefix = System.getProperty("user.dir").endsWith("/"+"mainprog") ? "." : "mainprog";
final String className = classElem.getQualifiedName().toString();
String fileName = prefix + "/src/java/" + className.replace('.', '/') + ".java";
FileInputStream in = new FileInputStream(fileName);
final CompilationUnit cu = JavaParser.parse(in);
final CallGraph graph = ...
generateInfoProperties(classElem, fileName, graph);
}
private void generateInfoProperties(TypeElement classElem, String inFilename, CallGraph graph) throws IOException {
final File outFile = new File(inFilename
.replace("/src/java/", "/src/java/") // <<< WHERE TO ???
.replace(".java", ".properties"));
outFile.getParentFile().mkdirs();
try (PrintWriter writer = new PrintWriter(outFile, "UTF-8")) {
final Properties ps = new Properties();
graph.storeAsProperties(ps);
ps.store(writer, inFilename);
writer.close();
}
}
}
As you can see, there is a lot of guesswork and "heuristics" going on when handling directory names. All that System.getProperty("user.dir") and replace("/src/java/", "/src/java/") is probably wrong, but what is better?
maven
In Maven I have 4 poms, of course
pom.xml
backend/pom.xml
annotationProcessor/pom.xml
mainprog/pom.xml
Only one of them seems to me to contain anything of note, i.e., the execution of the annotation processor in mainprog/pom.xml:
<project>
....
<dependencies>
<dependency>
<groupId>project</groupId>
<artifactId>backend</artifactId>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>project</groupId>
<artifactId>annotationProcessor</artifactId>
<scope>compile</scope>
</dependency>
</dependencies>
<build>
<finalName>mainprog</finalName>
<sourceDirectory>src/java</sourceDirectory>
<resources>
<resource>
<directory>${basedir}/src/conf</directory>
<targetPath>META-INF</targetPath>
</resource>
<resource>
<directory>${basedir}/web</directory>
</resource>
<resource>
<directory>${basedir}/src/java</directory>
<includes>
<include>**/*.xml</include>
<include>**/*.properties</include>
<include>**/*.wsdl</include>
<include>**/*.xsd</include>
</includes>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<annotationProcessors>
<annotationProcessor>demo.MyAnnotationProcessor
</annotationProcessor>
</annotationProcessors>
</configuration>
</plugin>
...
</plugins>
</build>
</project>
I thought that generating the file into /src/java/ and then having <resource><directory>${basedir}/src/java</directory> with <include>**/*.properties</include> would be enough, but it does not seem so. Why is that?
Use the provided Filer, which can be obtained using processingEnv.getFiler(). If you create a source file using it, the compiler will compile it on the next round and you won't need to worry about configuring Maven to compile generated source files.
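A minimal sketch of what that could look like for the properties file, assuming the demo package and the CallGraph type from the question (writing to CLASS_OUTPUT puts the file under target/classes, so it ends up on the test classpath and in the jar without extra resource configuration):
import java.io.IOException;
import java.io.Writer;
import java.util.Properties;
import javax.annotation.processing.Filer;
import javax.lang.model.element.TypeElement;
import javax.tools.FileObject;
import javax.tools.StandardLocation;

// Inside MyAnnotationProcessor: write the generated properties through the Filer
// instead of guessing source-tree paths.
private void generateInfoProperties(TypeElement classElem, CallGraph graph) throws IOException {
    Filer filer = processingEnv.getFiler();
    FileObject out = filer.createResource(
            StandardLocation.CLASS_OUTPUT,
            "demo",                                     // package of the generated resource
            classElem.getSimpleName() + ".properties",  // e.g. Mainprog.properties
            classElem);
    try (Writer writer = out.openWriter()) {
        Properties ps = new Properties();
        graph.storeAsProperties(ps);                    // CallGraph comes from the question's code
        ps.store(writer, classElem.getQualifiedName().toString());
    }
}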

Hadoop: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

My MapReduce job runs OK when assembled in Eclipse with all possible Hadoop and Hive jars included in the Eclipse project as dependencies. (These are the jars that come with a single-node, local Hadoop installation.)
Yet when trying to run the same program assembled using the Maven project below, I get:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
This exception happens when program is assembled using the following Maven project:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.bigdata.hadoop</groupId>
<artifactId>FieldCounts</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>FieldCounts</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-mapreduce-client-jobclient</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.hive.hcatalog</groupId>
<artifactId>hcatalog-core</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>16.0.1</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>${jdk.version}</source>
<target>${jdk.version}</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>attached</goal>
</goals>
<phase>package</phase>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>com.bigdata.hadoop.FieldCounts</mainClass>
</manifest>
</archive>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Please advise where and how to find compatible Hadoop jars.
[update_1]
I am running Hadoop 2.2.0.2.0.6.0-101
As I have found here: https://github.com/kevinweil/elephant-bird/issues/247
Hadoop 1.0.3: JobContext is a class
Hadoop 2.0.0: JobContext is an interface
In my pom.xml I have three jars with version 2.2.0
hadoop-hdfs 2.2.0
hadoop-common 2.2.0
hadoop-mapreduce-client-jobclient 2.2.0
hcatalog-core 0.12.0
The only exception is hcatalog-core, whose version is 0.12.0. I could not find any more recent version of this jar, and I need it!
How can I find which of these 4 jars produces java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected ?
Please, give me an idea how to solve this. (The only solution I see is to compile everything from source!)
[/update_1]
Full text of my MapReduce job:
package com.bigdata.hadoop;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.util.*;
import org.apache.hcatalog.mapreduce.*;
import org.apache.hcatalog.data.*;
import org.apache.hcatalog.data.schema.*;
import org.apache.log4j.Logger;
public class FieldCounts extends Configured implements Tool {
public static class Map extends Mapper<WritableComparable, HCatRecord, TableFieldValueKey, IntWritable> {
static Logger logger = Logger.getLogger("com.foo.Bar");
static boolean firstMapRun = true;
static List<String> fieldNameList = new LinkedList<String>();
/**
* Return a list of field names not containing `id` field name
* @param schema
* @return
*/
static List<String> getFieldNames(HCatSchema schema) {
// Filter out `id` name just once
if (firstMapRun) {
firstMapRun = false;
List<String> fieldNames = schema.getFieldNames();
for (String fieldName : fieldNames) {
if (!fieldName.equals("id")) {
fieldNameList.add(fieldName);
}
}
} // if (firstMapRun)
return fieldNameList;
}
@Override
protected void map( WritableComparable key,
HCatRecord hcatRecord,
//org.apache.hadoop.mapreduce.Mapper
//<WritableComparable, HCatRecord, Text, IntWritable>.Context context)
Context context)
throws IOException, InterruptedException {
HCatSchema schema = HCatBaseInputFormat.getTableSchema(context.getConfiguration());
//String schemaTypeStr = schema.getSchemaAsTypeString();
//logger.info("******** schemaTypeStr ********** : "+schemaTypeStr);
//List<String> fieldNames = schema.getFieldNames();
List<String> fieldNames = getFieldNames(schema);
for (String fieldName : fieldNames) {
Object value = hcatRecord.get(fieldName, schema);
String fieldValue = null;
if (null == value) {
fieldValue = "<NULL>";
} else {
fieldValue = value.toString();
}
//String fieldNameValue = fieldName+"."+fieldValue;
//context.write(new Text(fieldNameValue), new IntWritable(1));
TableFieldValueKey fieldKey = new TableFieldValueKey();
fieldKey.fieldName = fieldName;
fieldKey.fieldValue = fieldValue;
context.write(fieldKey, new IntWritable(1));
}
}
}
public static class Reduce extends Reducer<TableFieldValueKey, IntWritable,
WritableComparable, HCatRecord> {
protected void reduce( TableFieldValueKey key,
java.lang.Iterable<IntWritable> values,
Context context)
//org.apache.hadoop.mapreduce.Reducer<Text, IntWritable,
//WritableComparable, HCatRecord>.Context context)
throws IOException, InterruptedException {
Iterator<IntWritable> iter = values.iterator();
int sum = 0;
// Sum up occurrences of the given key
while (iter.hasNext()) {
IntWritable iw = iter.next();
sum = sum + iw.get();
}
HCatRecord record = new DefaultHCatRecord(3);
record.set(0, key.fieldName);
record.set(1, key.fieldValue);
record.set(2, sum);
context.write(null, record);
}
}
public int run(String[] args) throws Exception {
Configuration conf = getConf();
args = new GenericOptionsParser(conf, args).getRemainingArgs();
// To fix Hadoop "META-INFO" (http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file)
conf.set("fs.hdfs.impl",
org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl",
org.apache.hadoop.fs.LocalFileSystem.class.getName());
// Get the input and output table names as arguments
String inputTableName = args[0];
String outputTableName = args[1];
// Assume the default database
String dbName = null;
Job job = new Job(conf, "FieldCounts");
HCatInputFormat.setInput(job,
InputJobInfo.create(dbName, inputTableName, null));
job.setJarByClass(FieldCounts.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
// An HCatalog record as input
job.setInputFormatClass(HCatInputFormat.class);
// Mapper emits TableFieldValueKey as key and an integer as value
job.setMapOutputKeyClass(TableFieldValueKey.class);
job.setMapOutputValueClass(IntWritable.class);
// Ignore the key for the reducer output; emitting an HCatalog record as
// value
job.setOutputKeyClass(WritableComparable.class);
job.setOutputValueClass(DefaultHCatRecord.class);
job.setOutputFormatClass(HCatOutputFormat.class);
HCatOutputFormat.setOutput(job,
OutputJobInfo.create(dbName, outputTableName, null));
HCatSchema s = HCatOutputFormat.getTableSchema(job);
System.err.println("INFO: output schema explicitly set for writing:"
+ s);
HCatOutputFormat.setSchema(job, s);
return (job.waitForCompletion(true) ? 0 : 1);
}
public static void main(String[] args) throws Exception {
String classpath = System.getProperty("java.class.path");
//System.out.println("*** CLASSPATH: "+classpath);
int exitCode = ToolRunner.run(new FieldCounts(), args);
System.exit(exitCode);
}
}
And class for complex key:
package com.bigdata.hadoop;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;
import com.google.common.collect.ComparisonChain;
public class TableFieldValueKey implements WritableComparable<TableFieldValueKey> {
public String fieldName;
public String fieldValue;
public TableFieldValueKey() {} //must have a default constructor
//
public void readFields(DataInput in) throws IOException {
fieldName = in.readUTF();
fieldValue = in.readUTF();
}
public void write(DataOutput out) throws IOException {
out.writeUTF(fieldName);
out.writeUTF(fieldValue);
}
public int compareTo(TableFieldValueKey o) {
return ComparisonChain.start().compare(fieldName, o.fieldName)
.compare(fieldValue, o.fieldValue).result();
}
}
Hadoop has gone through a huge code refactoring from Hadoop 1.0 to Hadoop 2.0. One side effect
is that code compiled against Hadoop 1.0 is not compatible with Hadoop 2.0 and vice-versa.
However, source code is mostly compatible, so you just need to recompile the code against the target Hadoop distribution.
The exception "Found interface X, but class was expected" is very common when you're running
code that is compiled for Hadoop 1.0 on Hadoop 2.0 or vice-versa.
You can find the correct Hadoop version used in the cluster, then specify that Hadoop version in the pom.xml file. Build your project with the same version of Hadoop used in the cluster and deploy it.
You need to recompile "hcatalog-core" to support Hadoop 2.0.0.
Currently "hcatalog-core" only supports Hadoop 1.0
Obviously, you have a version incompatibility between your Hadoop and Hive versions. You need to upgrade (or downgrade) your Hadoop version or Hive version.
This is due to the incompatibility between Hadoop 1 and Hadoop 2.
Look for entries like this
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.1</version>
</dependency>
in your pom.xml.
These define the hadoop version to use. Change them or remove them as per your requirements.
I ran into this problem too.
I was trying to use HCatMultipleInputs with hive-hcatalog-core-0.13.0.jar. We are using Hadoop 2.5.1.
The following code change helped me fix the issue:
//JobContext ctx = new JobContext(conf,jobContext.getJobID());
JobContext ctx = new Job(conf);
