I have installed the SonarQube server on my localhost, and I am able to run it and analyze a Java project. I have also installed the Sonar plugin in Eclipse.
But I want to run Sonar from my own Java code (a simple Java class), retrieve the analysis results, and save them in a database. I searched for a tutorial but could not find an answer. Can anyone share sample code or a resource where I can learn what I need to accomplish this task?
import org.sonar.wsclient.Host;
import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.connectors.HttpClient4Connector;
import org.sonar.wsclient.services.ManualMeasure;
import org.sonar.wsclient.services.ManualMeasureCreateQuery;
import org.sonar.wsclient.services.ManualMeasureQuery;

public class SonarTask1 {

    public static void main(String[] args) {
        // Connect to the local SonarQube server with the default admin credentials
        String url = "http://localhost:9000";
        String login = "admin";
        String password = "admin";
        Sonar sonar = new Sonar(new HttpClient4Connector(new Host(url, login, password)));

        // Create a manual measure on an existing project, then read back all manual measures
        String projectKey = "java-sonar-runner-simple";
        String manualMetricKey = "burned_budget";
        sonar.create(ManualMeasureCreateQuery.create(projectKey, manualMetricKey).setValue(50.0));
        for (ManualMeasure manualMeasure : sonar.findAll(ManualMeasureQuery.create(projectKey))) {
            System.out.println("Manual measure on project: " + manualMeasure);
        }
    }
}
There are two things you can do from a Java program:
- launch a Sonar analysis: look at the Sonar Ant Task (its #launchAnalysis method) to see how to do that very easily.
- retrieve results from the Sonar server: check the Web API for that purpose (see the sketch just below).
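For that second point, here is a minimal sketch using the same sonar-ws-client library as the question's snippet; the project key and the metric keys are assumptions you would replace with your own:

import org.sonar.wsclient.Host;
import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.connectors.HttpClient4Connector;
import org.sonar.wsclient.services.Measure;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

public class SonarResultsReader {

    public static void main(String[] args) {
        // Same connection settings as in the question's snippet
        Sonar sonar = new Sonar(new HttpClient4Connector(
                new Host("http://localhost:9000", "admin", "admin")));

        // Fetch the project resource together with the metrics we are interested in
        // (project key and metric keys are assumptions)
        Resource project = sonar.find(ResourceQuery.createForMetrics(
                "java-sonar-runner-simple", "ncloc", "violations", "coverage"));

        // Each measure can now be read and, for example, persisted to your own database
        for (Measure measure : project.getMeasures()) {
            System.out.println(measure.getMetricKey() + " = " + measure.getValue());
        }
    }
}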
Regarding the "java.lang.NoClassDefFoundError: org/apache/http/client/methods/HttpRequestBase": you need these dependencies (the Sonar WS client uses them):
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.3.1</version>
</dependency>
<dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
    <version>1.1.1</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
    <version>1.7.7</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.6.2</version>
</dependency>
Also, when using the Eclipse IDE, I'd recommend updating the Maven project and checking "Force Update of Snapshots/Releases", because without that, org.apache.httpclient was not recognized on the classpath.
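The command-line equivalent, if you build outside Eclipse, is Maven's -U flag, which forces a check for updated releases and snapshots on remote repositories:

mvn clean install -U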
While working through a Hadoop RPC sample, I keep getting this error.
Following similar questions and answers, I checked the JAR files on my classpath and found hadoop-common.jar, which does contain org.apache.hadoop.conf.Configuration.
Here is the code that builds the RPC server:
package rpc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;

public class RPCServer implements MyBizable {

    public String doSomething(String str) {
        return str;
    }

    public static void main(String[] args) throws Exception {
        // Build and start a Hadoop RPC server exposing the MyBizable protocol
        Server server = new RPC.Builder(new Configuration())
                .setProtocol(MyBizable.class)
                .setInstance(new RPCServer())
                .setBindAddress("***.***.***.***")
                .setPort(****)
                .build();
        server.start();
    }
}
The error still shows up. Does anyone know how to solve it?
Any help would be greatly appreciated. Thanks in advance!
Are you using Maven? If yes, add the dependencies below.
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>
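As a side note, the server code in the question references a protocol interface that is not shown. A minimal sketch of what it could look like (Hadoop RPC protocol interfaces conventionally declare a versionID constant; the value here is an assumption):

package rpc;

// Minimal sketch of the protocol interface referenced by RPCServer.
public interface MyBizable {

    long versionID = 1L; // assumed protocol version; Hadoop RPC expects this constant

    String doSomething(String str);
}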
I'm trying to run the logistic regression example (https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/ml/JavaLogisticRegressionWithElasticNetExample.java).
This is the code:
import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public final class GettingStarted {

    public static void main(final String[] args) throws InterruptedException {
        System.setProperty("hadoop.home.dir", "C:\\winutils");
        SparkSession spark = SparkSession
                .builder()
                .appName("JavaLogisticRegressionWithElasticNetExample")
                .config("spark.master", "local")
                .getOrCreate();

        // $example on$
        // Load training data
        Dataset<Row> training = spark.read().format("libsvm").load("data/mllib/sample_libsvm_data.txt");

        LogisticRegression lr = new LogisticRegression()
                .setMaxIter(10)
                .setRegParam(0.3)
                .setElasticNetParam(0.8);

        // Fit the model
        LogisticRegressionModel lrModel = lr.fit(training);

        // Print the coefficients and intercept for logistic regression
        System.out.println("Coefficients: "
                + lrModel.coefficients() + " Intercept: " + lrModel.intercept());

        // We can also use the multinomial family for binary classification
        LogisticRegression mlr = new LogisticRegression()
                .setMaxIter(10)
                .setRegParam(0.3)
                .setElasticNetParam(0.8)
                .setFamily("multinomial");

        // Fit the model
        LogisticRegressionModel mlrModel = mlr.fit(training);

        // Print the coefficients and intercepts for logistic regression with multinomial family
        // (note: read these from mlrModel, not lrModel)
        System.out.println("Multinomial coefficients: " + mlrModel.coefficientMatrix()
                + "\nMultinomial intercepts: " + mlrModel.interceptVector());
        // $example off$

        spark.stop();
    }
}
I'm also using the same data file as the example (https://github.com/apache/spark/blob/master/data/mllib/sample_libsvm_data.txt).
But I get these errors:
Exception in thread "main" java.lang.AssertionError: assertion failed: unsafe symbol CompatContext (child of package macrocompat) in runtime reflection universe
at scala.reflect.internal.Symbols$Symbol.<init>(Symbols.scala:184)
at scala.reflect.internal.Symbols$TypeSymbol.<init>(Symbols.scala:2984)
at scala.reflect.internal.Symbols$ClassSymbol.<init>(Symbols.scala:3176)
at scala.reflect.internal.Symbols$StubClassSymbol.<init>(Symbols.scala:3471)
at scala.reflect.internal.Symbols$Symbol.newStubSymbol(Symbols.scala:498)
at scala.reflect.internal.pickling.UnPickler$Scan.readExtSymbol$1(UnPickler.scala:258)
at scala.reflect.internal.pickling.UnPickler$Scan.readSymbol(UnPickler.scala:284)
at scala.reflect.internal.pickling.UnPickler$Scan.readSymbolRef(UnPickler.scala:649)
at scala.reflect.internal.pickling.UnPickler$Scan.readType(UnPickler.scala:417)
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef$$anonfun$6.apply(UnPickler.scala:725)
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef$$anonfun$6.apply(UnPickler.scala:725)
at scala.reflect.internal.pickling.UnPickler$Scan.at(UnPickler.scala:179)
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef.completeInternal(UnPickler.scala:725)
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef.complete(UnPickler.scala:749)
at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1489)
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$12.scala$reflect$runtime$SynchronizedSymbols$SynchronizedSymbol$$super$info(SynchronizedSymbols.scala:162)
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
at scala.reflect.runtime.Gil$class.gilSynchronized(Gil.scala:19)
at scala.reflect.runtime.JavaUniverse.gilSynchronized(JavaUniverse.scala:16)
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:123)
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$12.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:162)
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.info(SynchronizedSymbols.scala:127)
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$12.info(SynchronizedSymbols.scala:162)
at scala.reflect.internal.Mirrors$RootsBase.ensureClassSymbol(Mirrors.scala:94)
at scala.reflect.internal.Mirrors$RootsBase.getClassByName(Mirrors.scala:102)
at scala.reflect.internal.Mirrors$RootsBase.getClassIfDefined(Mirrors.scala:114)
at scala.reflect.internal.Mirrors$RootsBase.getClassIfDefined(Mirrors.scala:111)
at scala.reflect.internal.Definitions$DefinitionsClass.BlackboxContextClass$lzycompute(Definitions.scala:496)
at scala.reflect.internal.Definitions$DefinitionsClass.BlackboxContextClass(Definitions.scala:496)
at scala.reflect.runtime.JavaUniverseForce$class.force(JavaUniverseForce.scala:305)
at scala.reflect.runtime.JavaUniverse.force(JavaUniverse.scala:16)
at scala.reflect.runtime.JavaUniverse.init(JavaUniverse.scala:147)
at scala.reflect.runtime.JavaUniverse.<init>(JavaUniverse.scala:78)
at scala.reflect.runtime.package$.universe$lzycompute(package.scala:17)
at scala.reflect.runtime.package$.universe(package.scala:17)
at org.apache.spark.sql.catalyst.ScalaReflection$.<init>(ScalaReflection.scala:40)
at org.apache.spark.sql.catalyst.ScalaReflection$.<clinit>(ScalaReflection.scala)
at org.apache.spark.sql.catalyst.encoders.RowEncoder$.org$apache$spark$sql$catalyst$encoders$RowEncoder$$serializerFor(RowEncoder.scala:74)
at org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:61)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:415)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:172)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:156)
at GettingStarted.main(GettingStarted.java:95)
Do you know what I'm doing wrong?
EDIT:
I run it in IntelliJ; it is a Maven project, and I added these dependencies:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.mongodb.spark</groupId>
    <artifactId>mongo-spark-connector_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.10</artifactId>
    <version>2.2.0</version>
</dependency>
tl;dr As soon as you start seeing errors internal to Scala that mention the reflection universe, think incompatible Scala versions.
The Scala versions of your libraries do not match one another (2.10 vs. 2.11).
You should align them all on your actual Scala version.
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId> <!-- This is Scala v2.11 -->
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.10</artifactId> <!-- This is Scala v2.10 -->
    <version>2.2.0</version>
</dependency>
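Assuming you stay on Scala 2.11, which your spark-core and spark-sql artifacts already use, the mllib dependency would be aligned like this:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.11</artifactId> <!-- now matches the other _2.11 artifacts -->
    <version>2.2.0</version>
</dependency>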
I am trying to convert JSON to a Java object using GSON in a Maven project, following a YouTube video for guidance (https://www.youtube.com/watch?v=Vqgghm9pWe0). However, an error occurs in the Main class: "package com.squareup.okhttp3 does not exist". The code is below:
package com.codebeasty.json;

import com.squareup.okhttp3.OkHttpClient;

public class Main {

    private static OkHttpClient client = new OkHttpClient();

    public static void main(String[] args) {
    }
}
I even put the dependencies in pom.xml:
<dependencies>
    <dependency>
        <groupId>com.squareup.okhttp3</groupId>
        <artifactId>okhttp</artifactId>
        <version>3.4.2</version>
    </dependency>
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.0</version>
    </dependency>
</dependencies>
I don't understand why it doesn't recognize com.squareup. Is there something extra I need to download? I have downloaded the JAR from http://square.github.io/okhttp/ and also tried building the project with dependencies. Please help :(
The Maven repository has 3.4.1 as the latest version. Try changing the version in your POM, or install the downloaded JAR into your local repository using:
mvn install:install-file -Dfile=path/to/okhttp-3.4.2.jar -DgroupId=com.squareup.okhttp3 -DartifactId=okhttp -Dversion=3.4.2 -Dpackaging=jar
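With the first option, the POM entry would simply pin the version that is available in the repository:

<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>okhttp</artifactId>
    <version>3.4.1</version>
</dependency>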
I am using the code below to convert DOCX to PDF.
public static void main(String[] args) {
    File inputdocxfile = new File(System.getProperty("user.dir") + "/src/test/resources/files/output/");
    File outputpdffile = new File(System.getProperty("user.dir") + "/src/test/resources/files/output/"
            + "CustomerOutputdocx.pdf");
    IConverter converter = LocalConverter.builder().baseFolder(inputdocxfile)
            .workerPool(20, 25, 2, TimeUnit.SECONDS).processTimeout(5, TimeUnit.SECONDS).build();
    Future<Boolean> conversion = converter.convert(inputdocxfile).as(DocumentType.MS_WORD).to(outputpdffile)
            .as(DocumentType.PDF).prioritizeWith(1000).schedule();
}
and I am getting the exception below, even though I am using the same code as shown on the official documents4j website.
Exception in thread "main" java.lang.IllegalStateException: The application was started without any registered or class-path discovered converters.
at com.documents4j.conversion.ExternalConverterDiscovery.validate(ExternalConverterDiscovery.java:68)
at com.documents4j.conversion.ExternalConverterDiscovery.loadConfiguration(ExternalConverterDiscovery.java:85)
at com.documents4j.conversion.DefaultConversionManager.<init>(DefaultConversionManager.java:22)
at com.documents4j.job.LocalConverter.makeConversionManager(LocalConverter.java:74)
at com.documents4j.job.LocalConverter.<init>(LocalConverter.java:47)
at com.documents4j.job.LocalConverter$Builder.build(LocalConverter.java:162)
at com.apakgroup.docgen.converters.ConvertToPdf.main(ConvertToPdf.java:19)
Exception in thread "Shutdown hook: com.documents4j.job.LocalConverter" java.lang.NullPointerException
at com.documents4j.job.LocalConverter.shutDown(LocalConverter.java:95)
at com.documents4j.job.ConverterAdapter$ConverterShutdownHook.run(ConverterAdapter.java:125)
I had missed a few dependencies, and I also had to use a newer version of commons-io. I previously used commons-io 1.3, but I later learned that documents4j requires commons-io 1.4 or later; once the commons-io version was changed, it worked. For anyone who wants to know, these are the dependencies I used to convert a DOCX file to PDF in Java:
<dependency>
    <groupId>com.documents4j</groupId>
    <artifactId>documents4j-api</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>com.documents4j</groupId>
    <artifactId>documents4j-local</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>com.documents4j</groupId>
    <artifactId>documents4j-transformer-msoffice-word</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.4</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>18.0</version>
</dependency>
And let me remind you: this library works only on machines that have MS Office installed, because it uses the application itself to convert DOCX to PDF. If you are hosting this code on a server, there is also a remote converter that you can use instead of the local converter; a sketch follows below.
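For reference, a minimal sketch of that remote setup, assuming a documents4j conversion server is already running (the URI and base folder are placeholders, and the RemoteConverter class comes from the documents4j-client artifact):

import java.io.File;
import java.util.concurrent.TimeUnit;

import com.documents4j.api.IConverter;
import com.documents4j.job.RemoteConverter;

public class RemoteConversionSketch {

    public static void main(String[] args) {
        // Talks to a conversion server over HTTP instead of a local MS Office installation
        IConverter converter = RemoteConverter.builder()
                .baseFolder(new File("/tmp/converter"))    // placeholder working folder
                .baseUri("http://conversion-server:9998")  // placeholder server URI
                .requestTimeout(10, TimeUnit.SECONDS)
                .build();
        // converter.convert(...) is then used exactly as with the LocalConverter
        converter.shutDown();
    }
}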
I've found the Osmosis libraries in the Maven repository and inserted the dependency into the pom.xml of my project.
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-core</artifactId>
    <version>0.44.1</version>
</dependency>
Now I am trying to import an *.osm.pbf data file into a PostgreSQL / PostGIS database. The comments on the main method of the Osmosis class say that you should write your own pipelines.
Does anyone know an example of how to implement the complete import functionality?
I've tried the Osmosis.run(args) method, but it does not seem to accept my arguments.
Additional notes:
My approach looks like this so far:
String args[] = { "--read-pbf file=" + DOWNLOAD_STUTTGART_PBF, "--log-progress",
"--write-pgsql host=\"localhost\" port=\"5432\"" +
"database=\"myDatabase\" user=\"admin\" password=\"pw123\"" };
Osmosis.run(args);
The output looks like this:
07:36:53.901 [main] INFO o.j.p.standard.StandardPluginManager - plug-in started - org.openstreetmap.osmosis.core.plugin.Core#0.43.0.1-49-gb18e1e9-dirty-SNAPSHOT
Okt 22, 2015 7:36:53 AM org.openstreetmap.osmosis.core.Osmosis run
INFORMATION: Preparing pipeline.
No data is imported into the database. Unfortunately, documentation is nonexistent, or I just cannot find it.
I have now found the solution:
String workingDir = System.getProperty("user.dir") + File.separator;
String args[] = { "--read-pbf", "file=" + workingDir + DOWNLOAD_STUTTGART_PBF, "--log-progress",
"--write-pgsql", "host=localhost:5432", "database=myDatabase", "user=admin",
"password=pw123" };
Osmosis.run(args);
The key is to pass each parameter as a separate element of the array.
You also need to include some more dependencies:
<!-- OSM Osmosis Importer Libs -->
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-core</artifactId>
    <version>0.44.1</version>
</dependency>
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-pbf</artifactId>
    <version>0.44.1</version>
</dependency>
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-pbf2</artifactId>
    <version>0.44.1</version>
</dependency>
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-osm-binary</artifactId>
    <version>0.44.1</version>
</dependency>
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-extract</artifactId>
    <version>0.44.1</version>
</dependency>
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-hstore-jdbc</artifactId>
    <version>0.44.1</version>
</dependency>
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-dataset</artifactId>
    <version>0.44.1</version>
</dependency>
<dependency>
    <groupId>org.openstreetmap.osmosis</groupId>
    <artifactId>osmosis-pgsnapshot</artifactId>
    <version>0.44.1</version>
</dependency>
I wish the developers would deliver documentation as excellent as the code itself.