I have an issue running Java integration tests with LocalstackTestRunner on a GitLab agent.
I've taken the example from the official LocalStack site:
import cloud.localstack.LocalstackTestRunner;
import cloud.localstack.TestUtils;
import cloud.localstack.docker.annotation.LocalstackDockerProperties;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.Bucket;
import org.junit.Test;
import org.junit.runner.RunWith;
import java.util.List;

@RunWith(LocalstackTestRunner.class)
@LocalstackDockerProperties(services = { "s3", "sqs", "kinesis:77077" })
public class MyCloudAppTest {

    @Test
    public void testLocalS3API() {
        AmazonS3 s3 = TestUtils.getClientS3();
        List<Bucket> buckets = s3.listBuckets();
    }
}
and run it with Gradle as gradle clean test.
If I run it locally on my MacBook everything is fine, but when it runs on the GitLab agent there is an issue:
com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to localhost.localstack.cloud:4566 [localhost.localstack.cloud/127.0.0.1] failed: Connection refused (Connection refused)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1153)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
My GitLab CI job looks as follows:
Localstack_test:
  stage: test
  services:
    - docker:dind
  when: always
  script:
    - ./gradlew clean test --stacktrace
It turns out the S3 client can't connect to localhost.localstack.cloud:4566 because the Docker container created by LocalstackTestRunner is started inside the parent docker:dind container, so the AmazonS3 client can't reach it. I've tried other AWS services with the same result: the AWS client can't access the LocalStack endpoint.
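Just to illustrate where the connection needs to go, the endpoint can also be configured explicitly when the client is built; this is only a hedged sketch (the localstack-it alias and the dummy credentials are assumptions, not part of my original setup):

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Build an S3 client against an explicit endpoint that is reachable from the
// job container (here the assumed GitLab service alias "localstack-it").
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localstack-it:4566", "us-east-1"))
        .withPathStyleAccessEnabled(true)
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("test", "test")))
        .build();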
I've found a workaround:
- add localstack as a service in gitlab-ci
- give it an alias
- expose the env variable HOSTNAME_EXTERNAL=alias
- implement IHostNameResolver so it returns the alias from the HOSTNAME_EXTERNAL variable specified in gitlab-ci
Something like this:
Gitlab-ci:
Localstack_test:
  stage: test
  services:
    - docker:dind
    - name: localstack/localstack
      alias: localstack-it
  variables:
    HOSTNAME_EXTERNAL: "localstack-it"
  when: always
  script:
    - ./gradlew clean test --stacktrace |& tee -a ./gradle.log
Java IT test:
@RunWith(LocalstackTestRunner.class)
@LocalstackDockerProperties(
    services = { "s3", "sqs", "kinesis:77077" },
    hostNameResolver = SystemEnvHostNameResolver.class
)
public class MyCloudAppTest {

    @Test
    public void testLocalS3API() {
        AmazonS3 s3 = TestUtils.getClientS3();
        List<Bucket> buckets = s3.listBuckets();
    }
}
public class SystemEnvHostNameResolver implements IHostNameResolver {

    private static final String HOSTNAME_EXTERNAL = "HOSTNAME_EXTERNAL";

    @Override
    public String getHostName() {
        String external = System.getenv(HOSTNAME_EXTERNAL);
        return !Strings.isNullOrEmpty(external) ?
                external :
                new LocalHostNameResolver().getHostName();
    }
}
It works, but as a result two LocalStack Docker containers are running and the inner one is still not reachable. Does anybody know a better solution?
Environment:
gradle-6.7
cloud.localstack:localstack-utils:0.2.5
I'm trying to build cdktf - Java Infrastructure as Code for Azure.
For the HDInsight Kafka cluster I'm not able to produce the correct cdktf classes.
I always run into:
Caused by: software.amazon.jsii.JsiiException: No stack could be identified for the construct at path 'dev-name-cluster'
Error: No stack could be identified for the construct at path 'dev-name-cluster'
My code looks like this:
public class KafkaAnnonymous extends TerraformStack {

    public KafkaAnnonymous(Construct scope, String id, final Environment environment,
            ResourceGroup rg, String storageAccount, Network network, KeyVault kv) {
        super(scope, id);

        AzurermProvider.Builder.create(this, "azureProvider")
                .features(AzurermProviderFeatures.builder()
                        .build())
                .subscriptionId("<SUBSCRIPTION_ID>")
                .build();

        AzurermBackend.Builder.create(this)
                .resourceGroupName("g_rg")
                .storageAccountName("g_storage_account_name")
                .containerName("terraform")
                .key(String.format("%s.terraform.tfstate", id))
                .build();

        create(scope, id, environment, rg, storageAccount, network, kv);
    }

    private final void create(Construct scope, String id, final Environment environment,
            ResourceGroup rg, String storageAccount, Network network, KeyVault kv) {
        HdinsightKafkaCluster.Builder
                .create(scope, id)
                .clusterVersion("2.4.1")
                .resourceGroupName(rg.getResourceGroup().getName())
                .name(id)
                .location("westeurope")
                .tier("Standard")
                .componentVersion(componentVersion())
                .gateway(gateway(password))
                .storageAccount(storage(storageAccount))
                .roles(roles(network, password))
                .build();
    }

    ...
}
Where is my mistake?
Are you initialising this stack somewhere? e.g. in your Main.java? This part seems to be missing: https://github.com/hashicorp/terraform-cdk/blob/main/examples/java/aws/src/main/java/com/mycompany/app/Main.java#L50
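Roughly, the missing piece would look like the sketch below (a minimal, self-contained example; in your case you would instantiate KafkaAnnonymous with its extra constructor arguments instead of the trimmed-down stack used here, and the import for Construct depends on your cdktf version):

import com.hashicorp.cdktf.App;
import com.hashicorp.cdktf.TerraformStack;
import software.constructs.Construct;

public class Main {

    // Trimmed-down stack so the sketch compiles on its own; in the question this
    // would be new KafkaAnnonymous(app, "dev-name-cluster", environment, rg, ...).
    static class MyStack extends TerraformStack {
        MyStack(Construct scope, String id) {
            super(scope, id);
            // provider, backend and resources go here
        }
    }

    public static void main(String[] args) {
        App app = new App();
        // The stack has to be created with the App as its scope; without this,
        // cdktf cannot identify a stack for the constructs defined inside it.
        new MyStack(app, "dev-name-cluster");
        app.synth();
    }
}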
I'm using OpenTelemetry to export trace information from the following application:
A Node.js Kafka producer sends messages to input-topic. It uses kafkajs instrumented with the opentelemetry-instrumentation-kafkajs library, following the AWS OTel example for NodeJS. Here is my tracer.js:
module.exports = () => {
  diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);
  // create a provider for activating and tracking with AWS IdGenerator
  const attributes = {
    'service.name': 'nodejs-producer',
    'service.namespace': 'axel'
  }
  let resource = new Resource(attributes)
  const tracerConfig = {
    idGenerator: new AWSXRayIdGenerator(),
    plugins: {
      kafkajs: { enabled: false, path: 'opentelemetry-plugin-kafkajs' }
    },
    resource: resource
  };
  const tracerProvider = new NodeTracerProvider(tracerConfig);
  // add OTLP exporter
  const otlpExporter = new CollectorTraceExporter({
    url: (process.env.OTEL_EXPORTER_OTLP_ENDPOINT) ? process.env.OTEL_EXPORTER_OTLP_ENDPOINT : "localhost:55680"
  });
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
  // Register the tracer with X-Ray propagator
  tracerProvider.register({
    propagator: new AWSXRayPropagator()
  });
  registerInstrumentations({
    tracerProvider,
    instrumentations: [new KafkaJsInstrumentation({})],
  });
  // Return a tracer instance
  return trace.getTracer("awsxray-tests");
}
A Java application reads from input-topic and produces to final-topic. It is instrumented with the AWS OTel Java agent and launched like below:
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_PROPAGATORS=xray
export OTEL_RESOURCE_ATTRIBUTES="service.name=otlp-consumer-producer,service.namespace=axel"
export OTEL_METRICS_EXPORTER=none
java -javaagent:"${PWD}/aws-opentelemetry-agent.jar" -jar "${PWD}/target/otlp-consumer-producer.jar"
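For reference, the consume-then-produce loop in the Java app is roughly the following (a simplified, hedged sketch using plain kafka-clients; the broker address and serializer settings are assumptions, only the topic names match my setup). The agent auto-instruments KafkaConsumer/KafkaProducer, so the X-Ray context arriving in the record headers is carried over to final-topic:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OtlpConsumerProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "otlp-consumer-producer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            while (true) {
                // Each polled record gets a consumer span; the producer span for
                // final-topic is created as its child by the agent.
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    producer.send(new ProducerRecord<>("final-topic", record.key(), record.value()));
                }
            }
        }
    }
}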
I'm using otel/opentelemetry-collector-contrib, which has the AWS X-Ray exporter:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  awsxray:
    region: 'eu-central-1'
    max_retries: 10
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
I can see from the log messages and also from the X-Ray console that traces are being published (with correct parent trace ids). Node.js log message:
{
traceId: '60d0c7d4cfc2d86b2df8624cb4bccead',
parentId: undefined,
name: 'input-topic',
id: '3e289f00c4499ae8',
kind: 3,
timestamp: 1624295380734468,
duration: 3787,
attributes: {
'messaging.system': 'kafka',
'messaging.destination': 'input-topic',
'messaging.destination_kind': 'topic'
},
status: { code: 0 },
events: []
}
and the Java consumer receives the headers:
Headers([x-amzn-trace-id:Root=1-60d0c7d4-cfc2d86b2df8624cb4bccead;Parent=3e289f00c4499ae8;Sampled=1])
As you can see, the parent and root ids match. However, the service map is constructed in a disconnected way:
What other configuration am I missing here to get a correct service map?
I am using java-grpc together with tikv-java (separately they work fine), but together I am struggling with the following error:
Exception in thread "main" java.lang.NoSuchFieldError: CONTEXT_SPAN_KEY
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor.interceptCall(CensusTracingModule.java:327)
at io.grpc.ClientInterceptors$InterceptorChannel.newCall(ClientInterceptors.java:104)
at io.grpc.internal.ManagedChannelImpl.newCall(ManagedChannelImpl.java:551)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:113)
at com.pv.app.GetInsertServiceGrpc$GetInsertServiceBlockingStub.insert(GetInsertServiceGrpc.java:195)
at com.pv.app.Client.main(Client.java:55)
My client code:
package com.pv.app;

import io.grpc.*;

public class Client {

    public static void main(String[] args) throws Exception {
        // Channel is the abstraction to connect to a service endpoint
        // Let's use plaintext communication because we don't have certs
        final ManagedChannel channel =
            ManagedChannelBuilder.forTarget("0.0.0.0:8080").usePlaintext().build();

        GetInsertServiceGrpc.GetInsertServiceBlockingStub stub =
            GetInsertServiceGrpc.newBlockingStub(channel);

        GetInsertServiceOuterClass.HelloMessage request =
            GetInsertServiceOuterClass.HelloMessage.newBuilder().setName("hello").build();

        System.out.println(request);
        System.out.println("b4 req");

        // Finally, make the call using the stub
        stub.insert(request);
        channel.shutdownNow();
    }
}
My server code:
package com.pv.app;

import io.grpc.Server;
import io.grpc.ServerBuilder;

/** Hello world! */
public class App {

    public static void main(String[] args) throws Exception {
        System.out.println("Hello-start");

        Server server = ServerBuilder.forPort(8080).addService(new GetInsertServiceImpl()).build();

        // Start the server
        server.start();

        // Server threads are running in the background.
        System.out.println("Server started");

        // Don't exit the main thread. Wait until server is terminated.
        server.awaitTermination();
    }
}
My implementation code:
package com.pv.app;

import org.tikv.common.TiConfiguration;
import org.tikv.common.TiSession;
import org.tikv.raw.RawKVClient;

public class GetInsertServiceImpl extends GetInsertServiceGrpc.GetInsertServiceImplBase {

    @Override
    public void insert(
            GetInsertServiceOuterClass.HelloMessage request,
            io.grpc.stub.StreamObserver<com.google.protobuf.Empty> responseObserver) {
        // HelloRequest has toString auto-generated.
        System.out.println("insert");
        System.out.println(request);

        TiConfiguration conf = TiConfiguration.createRawDefault("pd0:2379");
        System.out.println(1);
        System.out.println("2");
        System.out.println(conf);

        TiSession session = TiSession.create(conf);
        System.out.println("3");

        RawKVClient client = session.createRawClient();
        System.out.println("4");

        // When you are done, you must call onCompleted.
        responseObserver.onCompleted();
    }
}
My proto:
syntax = "proto3";

import "google/protobuf/empty.proto";

option java_package = "com.pv.app";

// Request payload
message HelloMessage {
  string name = 1;
}

// Defining a Service, a Service can have multiple RPC operations
service GetInsertService {
  // Define a RPC operation
  rpc insert (HelloMessage) returns (google.protobuf.Empty) {
  };
}
What I do to deploy:
In the downloaded client-java repo I run mvn clean install -Dmaven.test.skip=true
In my project folder:
mvn install:install-file \
  -Dfile=../client-java/target/tikv-client-java-2.0-SNAPSHOT.jar \
  -DgroupId=org.tikv \
  -DartifactId=tikv-client-java \
  -Dversion=2.0-SNAPSHOT \
  -Dpackaging=jar
In my project pom.xml
<dependency>
  <groupId>org.tikv</groupId>
  <artifactId>tikv-client-java</artifactId>
  <version>2.0-SNAPSHOT</version>
</dependency>
Running with Java 8:
mvn -DskipTests package exec:java -Dexec.mainClass=com.pv.app.App
mvn -DskipTests package exec:java -Dexec.mainClass=com.pv.app.Client
Does anyone have a suggestion how to fix this?
Full code is available here
I did my searching and tried to exclude grpc and opencensus and to switch versions - it did not help.
The problem is caused by conflicting io.opencensus versions. I was able to fix it by shading it in the tikv/client-java project.
In the tikv/client-java pom.xml, in the maven-shade-plugin configuration:
<relocations>
  ...
  <relocation>
    <pattern>io.opencensus</pattern>
    <shadedPattern>shade.io.opencensus</shadedPattern>
  </relocation>
</relocations>
UPDATE
I have just realized that there were changes to the pom.xml merged to master yesterday, so you may want to update if you haven't yet.
UPDATE 2
I have just checked your project with the recent version of tikv/client-java. The NoSuchFieldError: CONTEXT_SPAN_KEY is gone. There are other errors (java.net.UnknownHostException), but they seem unrelated.
I have set up a Hyperledger Fabric v1.0 network with 4 organisations, each having 1 peer, by following the steps in Building Your First Network.
Now I have
org1.example.com - with peer: peer0.org1.example.com and msp: Org1MSP
org2.example.com - with peer: peer0.org2.example.com and msp: Org2MSP
org3.example.com - with peer: peer0.org3.example.com and msp: Org3MSP
org4.example.com - with peer: peer0.org4.example.com and msp: Org4MSP
Now I can install the chaincode on the peers and instantiate it on the channel. I can also invoke and query the chaincode using the commands mentioned here, like:
Invoke: peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
-C $CHANNEL_NAME -n mycc -c '{"Args":["invoke","a","b","10"]}'
Query: peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'
I was previously using the Hyperledger Fabric v0.6 service provided by IBM Bluemix, and my Java applications invoked the chaincode through the REST API.
How can I use the REST API in this local network setup using the Docker images, so that my Java applications can interact with my chaincode?
Since I am not so familiar with this local network setup, please suggest how I can make it work.
Note:
I am using a Windows 7 machine and the network is set up by running the commands in the Docker Quickstart Terminal.
Thanks in advance.
There is no REST API in Hyperledger Fabric v1.0.0; however, there is a Java SDK which can be used to interact with peers. You can set up your Java project with the following Maven dependency:
<dependency>
  <groupId>org.hyperledger.fabric-sdk-java</groupId>
  <artifactId>fabric-sdk-java</artifactId>
  <version>1.0.0</version>
</dependency>
Now you can use SDK APIs to invoke/query your chaincodes:
Get an instance of the HF client:
final HFClient client = HFClient.createNewInstance();
Set up the crypto materials for the client:
// Set default crypto suite for HF client
client.setCryptoSuite(CryptoSuite.Factory.getCryptoSuite());
client.setUserContext(new User() {

    public String getName() {
        return "testUser";
    }

    public Set<String> getRoles() {
        return null;
    }

    public String getAccount() {
        return null;
    }

    public String getAffiliation() {
        return null;
    }

    public Enrollment getEnrollment() {
        return new Enrollment() {
            public PrivateKey getKey() {
                // Load your private key
            }

            public String getCert() {
                // Read client certificate
            }
        };
    }

    public String getMspId() {
        return "Org1MSP";
    }
});
Now the channel configuration:
final Channel channel = client.newChannel("mychannel");
channel.addOrderer(client.newOrderer("orderer0", "grpc://localhost:7050"));
channel.addPeer(client.newPeer("peer0", "grpc://localhost:7051"));
channel.initialize();
Create the transaction proposal:
final TransactionProposalRequest proposalRequest = client.newTransactionProposalRequest();
final ChaincodeID chaincodeID = ChaincodeID.newBuilder()
.setName("myCC")
.setVersion("1.0")
.setPath("github.com/yourpackage/chaincode/")
.build();
proposalRequest.setChaincodeID(chaincodeID);
proposalRequest.setFcn("fcn");
proposalRequest.setProposalWaitTime(TimeUnit.SECONDS.toMillis(10));
proposalRequest.setArgs(new String[]{"arg1", "arg2"});
Send the proposal:
final Collection<ProposalResponse> responses = channel.sendTransactionProposal(proposalRequest);
CompletableFuture<BlockEvent.TransactionEvent> txFuture = channel.sendTransaction(responses, client.getUserContext());
BlockEvent.TransactionEvent event = txFuture.get();
System.out.println(event.toString());
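For the query side, a hedged sketch along the same lines (method names as in fabric-sdk-java 1.0; the function name and argument are just examples matching the chaincode from the question):

// Build a query request against the same chaincode and send it to the peers.
final QueryByChaincodeRequest queryRequest = client.newQueryProposalRequest();
queryRequest.setChaincodeID(chaincodeID);
queryRequest.setFcn("query");
queryRequest.setArgs(new String[]{"a"});

final Collection<ProposalResponse> queryResponses = channel.queryByChaincode(queryRequest);
for (ProposalResponse response : queryResponses) {
    // The chaincode's return value is carried in the response payload.
    System.out.println(response.getProposalResponse().getResponse().getPayload().toStringUtf8());
}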
I have a strange behaviour in my little Akka cluster project:
I have a very simple application.conf:
akka {
  # specify logger
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  stdout-loglevel = "DEBUG"

  # configure remote connection
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    enabled-transport = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 3100
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://mycluster@127.0.0.1:3100"
    ]
  }
}
And a very simple main program:
public class MediatorNodeStartup {

    public static void main(String[] args) {
        String port = System.getProperty("config.port") == null ? "3100" : System.getProperty("config.port");

        Config config = ConfigFactory.parseString("akka.remote.netty.tcp.port=" + port)
                .withFallback(ConfigFactory.load());

        ActorSystem system = ActorSystem.create("mycluster", config);
    }
}
Akka, Akka-Remote and Akka-Cluster are all included via Maven and visible on the classpath.
Now when I execute this, it just fails with a ClassNotFoundException: akka.cluster.ClusterActorRefProvider,
although the akka.cluster.* package is definitely on my classpath.
Strangely enough, on another machine this code just works.
So I suppose it has something to do with my Eclipse or runtime configuration, but sadly I have no idea where to start searching for the error.
Any ideas? I will provide further information if necessary.