I'm getting the following exception when using the CosmosDB SDK for Java:
Exception in thread "main" java.lang.NoSuchFieldError: ALLOW_TRAILING_COMMA
at com.microsoft.azure.cosmosdb.internal.Utils.<clinit>(Utils.java:75)
at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.<clinit>(RxDocumentClientImpl.java:132)
at com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient$Builder.build(AsyncDocumentClient.java:224)
at Program2.<init>(Program2.java:25)
at Program2.main(Program2.java:30)
I'm just trying to connect to CosmosDB using AsyncDocumentClient; the exception is thrown the moment the client is built.
executorService = Executors.newFixedThreadPool(100);
scheduler = Schedulers.from(executorService);
client = new AsyncDocumentClient.Builder()
        .withServiceEndpoint("[cosmosurl]")
        .withMasterKeyOrResourceToken("[mykey]")
        .withConnectionPolicy(ConnectionPolicy.GetDefault())
        .withConsistencyLevel(ConsistencyLevel.Eventual)
        .build();
I have heard about a library conflict, but I haven't found the proper fix yet.
Thanks!
Please refer to my working sample.
Java code:
import com.microsoft.azure.cosmosdb.ConnectionPolicy;
import com.microsoft.azure.cosmosdb.ConsistencyLevel;
import com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient;
public class test {
    public static void main(String[] args) throws Exception {
        AsyncDocumentClient client = new AsyncDocumentClient.Builder()
                .withServiceEndpoint("https://XXX.documents.azure.com:443/")
                .withMasterKeyOrResourceToken("XXXX")
                .withConnectionPolicy(ConnectionPolicy.GetDefault())
                .withConsistencyLevel(ConsistencyLevel.Eventual)
                .build();
        System.out.println(client);
    }
}
pom.xml
<dependencies>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-cosmosdb</artifactId>
<version>2.6.4</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-cosmosdb-commons</artifactId>
<version>2.6.4</version>
</dependency>
</dependencies>
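For background, ALLOW_TRAILING_COMMA is a JsonParser.Feature that was introduced in jackson-core 2.9, so a NoSuchFieldError there almost always means an older jackson-core is being picked up from some other dependency. If the plain two-dependency pom above is not an option for you, here is a sketch of how to hunt down and exclude the stale copy; the groupId/artifactId in the exclusion block are placeholders for whatever your own dependency tree shows:
# list every copy of jackson-core that Maven resolves
mvn dependency:tree -Dincludes=com.fasterxml.jackson.core

<!-- then exclude the old jackson-core from the dependency that drags it in -->
<dependency>
    <groupId>some.group</groupId>
    <artifactId>conflicting-artifact</artifactId>
    <exclusions>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>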
I cannot see the messages in the SQS queue being consumed by the @SqsListener
import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener; //others
@Component
public class Consumer {
    private static final Logger logger = LoggerFactory.getLogger(Consumer.class);

    @SqsListener(value = "TEST-MY-QUEUE")
    public void receiveMessage(String stringJson) {
        System.out.println("***Consuming message: " + stringJson);
        logger.info("Consuming message: " + stringJson);
    }
}
My configuration (here I print the client queues, and I can definitely spot the queue I want to consume, TEST-MY-QUEUE; it prints the URL correctly for the region). I can also see the region loaded correctly (same as the queue) in regionProvider:
@Configuration
public class AwsConfiguration {

    @Bean
    @Primary
    AmazonSQSAsync sqsClient() {
        AmazonSQSAsync amazonSQSAsync = AmazonSQSAsyncClientBuilder.defaultClient();
        System.out.println("Client queues = " + amazonSQSAsync.listQueues()); // The queue I want to consume is here
        return amazonSQSAsync;
    }

    @Bean
    AwsRegionProvider regionProvider() {
        DefaultAwsRegionProviderChain defaultAwsRegionProviderChain = new DefaultAwsRegionProviderChain();
        System.out.println("Region = " + defaultAwsRegionProviderChain.getRegion());
        return defaultAwsRegionProviderChain;
    }

    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer(AmazonSQSAsync amazonSQSAsync, QueueMessageHandler queueMessageHandler) {
        SimpleMessageListenerContainer simpleMessageListenerContainer = new SimpleMessageListenerContainer();
        simpleMessageListenerContainer.setAmazonSqs(amazonSQSAsync);
        simpleMessageListenerContainer.setMessageHandler(queueMessageHandler);
        simpleMessageListenerContainer.setMaxNumberOfMessages(10);
        simpleMessageListenerContainer.setTaskExecutor(threadPoolTaskExecutor());
        return simpleMessageListenerContainer;
    }

    @Bean
    public QueueMessageHandler queueMessageHandler(AmazonSQSAsync amazonSQSAsync) {
        QueueMessageHandlerFactory queueMessageHandlerFactory = new QueueMessageHandlerFactory();
        queueMessageHandlerFactory.setAmazonSqs(amazonSQSAsync);
        QueueMessageHandler queueMessageHandler = queueMessageHandlerFactory.createQueueMessageHandler();
        return queueMessageHandler;
    }

    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(10);
        executor.initialize();
        return executor;
    }
}
And pom.xml (Java 11, Spring Boot, Spring Cloud AWS):
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<version>2.5.6</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-aws-core</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-aws-autoconfigure</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bootstrap</artifactId>
<version>3.0.3</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-aws</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-aws-messaging</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
I noticed very similar issues in other questions here and changed my dependencies in pom.xml to spring-cloud-starter-aws-messaging, but that didn't fix it for me. I double-checked the names (queue, annotation) and all seems fine.
When I run my app it starts fine, but I don't see any logs or exceptions. Not one message is consumed.
What am I missing?
Thank you
You are using a third-party API. To invoke Amazon Simple Queue Service (SQS) from a Java project, use the official AWS SDK for Java v2. If you are not sure how to use this SDK, see the developer guide:
Developer guide - AWS SDK for Java 2.x
For SQS-specific information, see:
Working with Amazon Simple Queue Service
This has links to the AWS GitHub repository, where you will find POM dependencies, code, etc.
In the end it was an issue with the configuration (the credentials).
In application.yml:
cloud:
  aws:
    credentials:
      useDefaultAwsCredentialsChain: true # will use the credentials in ~/.aws
And then in the configuration class where you create the AmazonSQSAsync, just make it use that config:
public AmazonSQSAsync amazonSQSAsync() {
    DefaultAWSCredentialsProviderChain defaultAWSCredentialsProviderChain = new DefaultAWSCredentialsProviderChain();
    return AmazonSQSAsyncClientBuilder.standard()
            .withRegion(region) // region resolved elsewhere in the config
            .withCredentials(defaultAWSCredentialsProviderChain)
            .build();
}
I have created a simple Sikuli script within IntelliJ; however, when attempting to execute the script, it throws the following exception:
Exception in thread "main" java.lang.ExceptionInInitializerError
Currently I'm using Java 11 and the following Maven dependency:
<dependency>
<groupId>com.sikulix</groupId>
<artifactId>sikulixapi</artifactId>
<version>1.1.0</version>
</dependency>
My Script:
public class Test {
    public static void main(String[] args) throws FindFailed {
        Screen s = new Screen();
        // Click on the settings image
        Pattern setting = new Pattern("image1.png");
        s.wait(setting, 2000); // note: this timeout argument is in seconds
        s.click(setting);      // click the matched pattern rather than the bare screen
    }
}
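Not stated in the question but probably relevant: sikulixapi 1.1.0 predates Java 11 by several years, and an ExceptionInInitializerError at startup is the typical symptom of running it on a newer JDK. A possible fix, assuming the project can move to the 2.x line, is to bump the dependency (verify the current version on Maven Central):
<dependency>
    <groupId>com.sikulix</groupId>
    <artifactId>sikulixapi</artifactId>
    <version>2.0.5</version>
</dependency>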
I have a problem with a Maven web app with the following dependencies:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>2.0.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>2.0.1</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-java-driver</artifactId>
<version>3.4.0-rc1</version>
</dependency>
<dependency>
<groupId>org.mongodb.spark</groupId>
<artifactId>mongo-spark-connector_2.10</artifactId>
<version>2.0.0-rc1</version>
</dependency>
In an EJB (@Stateless with a local interface, called from a servlet) I have:
public void callSpark(QParams qp) {
    String uri = "mongodb://localhost:27017/admin.EventCollection";
    try {
        SparkConf conf = new SparkConf()
                .setMaster("local")
                .setAppName("BigDataEvent")
                .set("spark.mongodb.input.partitioner", "MongoPaginateBySizePartitioner")
                .set("spark.app.id", "BigDataAnalysis")
                .set("spark.mongodb.input.uri", uri)
                .set("spark.mongodb.output.uri", uri);
        JavaSparkContext jsc = new JavaSparkContext(conf);
        SparkSession sparkSession = SparkSession.builder().getOrCreate();
        SQLContext sqlc = new org.apache.spark.sql.SQLContext(jsc);
        Dataset<Row> df = MongoSpark.load(jsc).toDF();
        df.createOrReplaceTempView("EventCollection");
        postCodeQuery(df, qp);                    // my method
        eventTypeQuery(sparkSession, eventType); // my method
    } catch (Exception ex) {
        Logger.getLogger(SparkContext.class.getName()).log(Level.SEVERE, null, ex);
    }
}
This code works very well in a Java client Maven application, but in this situation (a Maven web app) the application crashes (with no exception or error message) when I call:
JavaSparkContext jsc = new JavaSparkContext(conf);
The application builds and deploys (on GlassFish 4.1.1) without errors or exceptions.
Has anyone had similar experiences with this configuration?
Thanks...
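One detail worth checking first: the catch (Exception ex) block in callSpark never sees JVM Errors such as NoClassDefFoundError or ExceptionInInitializerError, which are exactly what classloader clashes between Spark and an EE container like GlassFish tend to produce, so a crash "without exception" may just be a swallowed Error. A minimal diagnostic sketch (the helper name is made up; only the widened catch matters):
// Hypothetical helper: create the context but surface Errors as well as Exceptions.
private JavaSparkContext createContextLoud(SparkConf conf) {
    try {
        return new JavaSparkContext(conf);
    } catch (Throwable t) { // Throwable also catches Errors, which catch (Exception) misses
        Logger.getLogger(SparkContext.class.getName()).log(Level.SEVERE, "Spark context creation failed", t);
        throw new RuntimeException(t);
    }
}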
I'm running a Spark job from an EMR cluster which connects to Cassandra on EC2.
The following are the dependencies which I'm using for my project.
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.5.0-M1</version>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>2.1.6</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.5.0-M3</version>
</dependency>
The issue that I'm facing here is that if I use cassandra-driver-core 3.0.0, I get the following error:
java.lang.ExceptionInInitializerError
at mobi.vserv.SparkAutomation.DriverTester.doTest(DriverTester.java:28)
at mobi.vserv.SparkAutomation.DriverTester.main(DriverTester.java:16)
Caused by: java.lang.IllegalStateException: Detected Guava issue #1635 which indicates that a version of Guava less than 16.01 is in use. This introduces codec resolution issues and potentially other incompatibility issues in the driver. Please upgrade to Guava 16.01 or later.
at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:67)
... 2 more
I have tried including Guava version 19.0 as well, but I'm still unable to run the job,
and when I downgrade to cassandra-driver-core 2.1.6 I get the following error.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /EMR PUBLIC IP:9042 (com.datastax.driver.core.TransportException: [/EMR PUBLIC IP:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
Please note that I have tested my code locally, where it runs absolutely fine, and I have tried the different combinations of dependencies mentioned here: https://github.com/datastax/spark-cassandra-connector
Code:
public class App1 {
    private static Logger logger = LoggerFactory.getLogger(App1.class);
    static SparkConf conf = new SparkConf().setAppName("SparkAutomation").setMaster("yarn-cluster");
    static JavaSparkContext sc = null;
    static {
        sc = new JavaSparkContext(conf);
    }

    public static void main(String[] args) throws Exception {
        JavaRDD<String> Data = sc.textFile("S3 PATH TO GZ FILE/*.gz");
        JavaRDD<UserSetGet> usgRDD1 = Data.map(new ConverLineToUSerProfile());
        List<UserSetGet> t3 = usgRDD1.collect();
        for (int i = 0; i < t3.size(); i++) { // was i <= t3.size(), which overruns the list
            try {
                phpcallone php = new phpcallone();
                php.sendRequest(t3.get(i));
            } catch (Exception e) {
                logger.error("This Has reached ====> " + e);
            }
        }
    }
}
public class phpcallone {
    private static Logger logger = LoggerFactory.getLogger(phpcallone.class);
    static String pid;

    public void sendRequest(UserSetGet usg) throws JSONException, IOException, InterruptedException {
        UpdateCassandra uc = new UpdateCassandra();
        try {
            uc.UpdateCsrd();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
public class UpdateCassandra {
    public void UpdateCsrd() throws ClassNotFoundException {
        Cluster.Builder clusterBuilder = Cluster.builder()
                .addContactPoint("PUBLIC IP ").withPort(9042)
                .withCredentials("username", "password");
        clusterBuilder.getConfiguration().getSocketOptions().setConnectTimeoutMillis(10000);
        try {
            Session session = clusterBuilder.build().connect("dmp");
            session.execute("USE dmp");
            System.out.println("Connection established");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Assuming that you are using EMR 4.1+, you can pass the Guava jar to the --jars option of spark-submit. Then supply a configuration file to EMR so that user class paths are used first.
For example, in a file setup.json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.userClassPathFirst": "true",
      "spark.executor.userClassPathFirst": "true"
    }
  }
]
You would supply the --configurations file://setup.json option to the create-cluster AWS CLI command.
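For illustration, the two pieces might be wired together like this (the cluster name, release label, and jar path are placeholders; check them against your own setup):
aws emr create-cluster --name "spark-cassandra" --release-label emr-4.1.0 \
    --applications Name=Spark --configurations file://setup.json ...

spark-submit --jars /home/hadoop/guava-16.0.1.jar --class mobi.vserv.SparkAutomation.App1 app.jar
An alternative that avoids classpath-ordering tricks altogether is to shade Guava into the application jar with the maven-shade-plugin, so the Cassandra driver always sees a new-enough copy regardless of what Spark or EMR put on the classpath. A sketch (the relocated package prefix is an arbitrary choice):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals><goal>shade</goal></goals>
            <configuration>
                <relocations>
                    <relocation>
                        <pattern>com.google.common</pattern>
                        <shadedPattern>shaded.guava</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>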
I'm working on a project with GWT and J2SE clients. The GWT part is working great, but now there are problems with the J2SE client:
"The server understands the content type of the request entity and the
syntax of the request entity is correct but was unable to process the
contained instructions"
"The serialized representation must have this media type:
application/x-java-serialized-object or this one:
application/x-java-serialized-object+xml"
This code was working some months/versions ago... Both PUT and POST produce this error, while GET is working. What's wrong here?
Here's a really simple test case
// Shared interface (name and method harmonized with the server and client below)
public interface J2SeServerResourceInt
{
    @Post("json")
    public J2seStatusDto postJ2seStatusDto(J2seStatusDto pJ2seStatusDto);
}
// Java bean
public class J2seStatusDto implements Serializable
{
    private static final long serialVersionUID = 6901448809350740172L;

    private String mTest;

    public J2seStatusDto()
    {
    }

    public J2seStatusDto(String pTest)
    {
        setTest(pTest);
    }

    public String getTest()
    {
        return mTest;
    }

    public void setTest(String pTest)
    {
        mTest = pTest;
    }
}
// Server
public class J2seServerResource extends ClaireServerResource implements J2SeServerResourceInt
{
    @Override
    public J2seStatusDto postJ2seStatusDto(J2seStatusDto pJ2seStatusDto)
    {
        return pJ2seStatusDto;
    }
}
// J2SE client
public class ClaireJsSeTestClient
{
    public static void main(String[] args)
    {
        Reference lReference = new Reference("http://localhost:8888//rest/j2se");
        ClientResource lClientResource = new ClientResource(lReference);
        lClientResource.accept(MediaType.APPLICATION_JSON);
        J2SeServerResourceInt lJ2SeServerResource = lClientResource.wrap(J2SeServerResourceInt.class);
        J2seStatusDto lJ2seStatusDto = new J2seStatusDto("TEST");
        J2seStatusDto lJ2seResultDto = lJ2SeServerResource.postJ2seStatusDto(lJ2seStatusDto);
    }
}
// Maven J2Se Client
<dependencies>
<dependency>
<groupId>org.restlet.jse</groupId>
<artifactId>org.restlet</artifactId>
<version>3.0-M1</version>
</dependency>
<dependency>
<groupId>org.restlet.jse</groupId>
<artifactId>org.restlet.ext.jackson</artifactId>
<version>3.0-M1</version>
</dependency>
</dependencies>
// Maven GAE Server
<dependency>
<groupId>org.restlet.gae</groupId>
<artifactId>org.restlet</artifactId>
<version>3.0-M1</version>
</dependency>
<dependency>
<groupId>org.restlet.gae</groupId>
<artifactId>org.restlet.ext.servlet</artifactId>
<version>3.0-M1</version>
</dependency>
<dependency>
<groupId>org.restlet.gae</groupId>
<artifactId>org.restlet.ext.jackson</artifactId>
<version>3.0-M1</version>
</dependency>
<dependency>
<groupId>org.restlet.gae</groupId>
<artifactId>org.restlet.ext.gwt</artifactId>
<version>3.0-M1</version>
</dependency>
<dependency>
<groupId>org.restlet.gwt</groupId>
<artifactId>org.restlet</artifactId>
<version>3.0-M1</version>
<scope>compile</scope>
</dependency>
Thierry Boileau fixed our problem (mistake):
https://github.com/restlet/restlet-framework-java/issues/1029#issuecomment-76212062
Due to the constraints of the GAE platform (there is no support for chunked encoding), you have to specify that the request entity is buffered first:
cr.setRequestEntityBuffering(true);
Thanks for the great support at restlet.com
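In the test client above, that one call slots in right after the ClientResource is created; a minimal sketch reusing the earlier placeholder URL:
ClientResource lClientResource = new ClientResource(new Reference("http://localhost:8888//rest/j2se"));
lClientResource.accept(MediaType.APPLICATION_JSON);
lClientResource.setRequestEntityBuffering(true); // buffer the entity so the request is not sent chunked (needed on GAE)
J2SeServerResourceInt lJ2SeServerResource = lClientResource.wrap(J2SeServerResourceInt.class);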