AWS XRay service map components are disconnected - java

I'm using OpenTelemetry to export trace information for the following application:
A Node.js Kafka producer sends messages to input-topic. It uses kafkajs instrumented with the opentelemetry-instrumentation-kafkajs library, following the AWS OTel example for Node.js. Here is my tracer.js:
module.exports = () => {
  diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);

  // create a provider for activating and tracking with AWS IdGenerator
  const attributes = {
    'service.name': 'nodejs-producer',
    'service.namespace': 'axel'
  }
  let resource = new Resource(attributes)
  const tracerConfig = {
    idGenerator: new AWSXRayIdGenerator(),
    plugins: {
      kafkajs: { enabled: false, path: 'opentelemetry-plugin-kafkajs' }
    },
    resource: resource
  };
  const tracerProvider = new NodeTracerProvider(tracerConfig);

  // add OTLP exporter
  const otlpExporter = new CollectorTraceExporter({
    url: (process.env.OTEL_EXPORTER_OTLP_ENDPOINT) ? process.env.OTEL_EXPORTER_OTLP_ENDPOINT : "localhost:55680"
  });
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));

  // Register the tracer with X-Ray propagator
  tracerProvider.register({
    propagator: new AWSXRayPropagator()
  });

  registerInstrumentations({
    tracerProvider,
    instrumentations: [new KafkaJsInstrumentation({})],
  });

  // Return a tracer instance
  return trace.getTracer("awsxray-tests");
}
A Java application reads from input-topic and produces to final-topic. It is instrumented with the AWS OTel Java agent and launched like below:
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_PROPAGATORS=xray
export OTEL_RESOURCE_ATTRIBUTES="service.name=otlp-consumer-producer,service.namespace=axel"
export OTEL_METRICS_EXPORTER=none
java -javaagent:"${PWD}/aws-opentelemetry-agent.jar" -jar "${PWD}/target/otlp-consumer-producer.jar"
I'm using the otel/opentelemetry-collector-contrib image, which includes the AWS X-Ray exporter:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  awsxray:
    region: 'eu-central-1'
    max_retries: 10
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
I can see from the log messages and also from the X-Ray console that traces are being published (with correct parent trace IDs). Node.js log message:
{
  traceId: '60d0c7d4cfc2d86b2df8624cb4bccead',
  parentId: undefined,
  name: 'input-topic',
  id: '3e289f00c4499ae8',
  kind: 3,
  timestamp: 1624295380734468,
  duration: 3787,
  attributes: {
    'messaging.system': 'kafka',
    'messaging.destination': 'input-topic',
    'messaging.destination_kind': 'topic'
  },
  status: { code: 0 },
  events: []
}
and the Java consumer receives matching headers:
Headers([x-amzn-trace-id:Root=1-60d0c7d4-cfc2d86b2df8624cb4bccead;Parent=3e289f00c4499ae8;Sampled=1])
As you can see, the parent and root IDs match each other. However, the service map is constructed in a disconnected way:
What other configuration am I missing here to build a correct service map?

Related

Can't start localstack on Gitlab runner with LocalstackTestRunner

I have an issue running Java integration tests with LocalstackTestRunner on a GitLab agent.
I've taken the example from the official LocalStack site:
import cloud.localstack.LocalstackTestRunner;
import cloud.localstack.TestUtils;
import cloud.localstack.docker.annotation.LocalstackDockerProperties;
@RunWith(LocalstackTestRunner.class)
@LocalstackDockerProperties(services = { "s3", "sqs", "kinesis:77077" })
public class MyCloudAppTest {

    @Test
    public void testLocalS3API() {
        AmazonS3 s3 = TestUtils.getClientS3();
        List<Bucket> buckets = s3.listBuckets();
    }
}
and run it with Gradle as gradle clean test.
If I run it locally on my MacBook everything is OK, but when it runs on the GitLab agent there is an issue:
com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to localhost.localstack.cloud:4566 [localhost.localstack.cloud/127.0.0.1] failed: Connection refused (Connection refused)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1153)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
My GitLab CI job looks as follows:
Localstack_test:
  stage: test
  services:
    - docker:dind
  when: always
  script:
    - ./gradlew clean test --stacktrace
The S3 client can't connect to localhost.localstack.cloud:4566 because the Docker container created by LocalstackTestRunner is started inside the parent docker:dind container, so the AmazonS3 client can't reach it. I've tried other AWS services with the same result: the AWS client can't access the LocalStack endpoint.
I've found a workaround:
- add localstack as a service in gitlab-ci
- add an alias to it
- expose the env variable HOSTNAME_EXTERNAL=alias
- make an implementation of IHostNameResolver that returns my alias from the HOSTNAME_EXTERNAL variable specified in gitlab-ci
Something like this:
Gitlab-ci:
Localstack_test:
  stage: test
  services:
    - docker:dind
    - name: localstack/localstack
      alias: localstack-it
  variables:
    HOSTNAME_EXTERNAL: "localstack-it"
  when: always
  script:
    - ./gradlew clean test --stacktrace |& tee -a ./gradle.log
Java IT test:
@RunWith(LocalstackTestRunner.class)
@LocalstackDockerProperties(
        services = { "s3", "sqs", "kinesis:77077" },
        hostNameResolver = SystemEnvHostNameResolver.class
)
public class MyCloudAppTest {

    @Test
    public void testLocalS3API() {
        AmazonS3 s3 = TestUtils.getClientS3();
        List<Bucket> buckets = s3.listBuckets();
    }
}
public class SystemEnvHostNameResolver implements IHostNameResolver {

    private static final String HOSTNAME_EXTERNAL = "HOSTNAME_EXTERNAL";

    @Override
    public String getHostName() {
        String external = System.getenv(HOSTNAME_EXTERNAL);
        return !Strings.isNullOrEmpty(external) ?
                external :
                new LocalHostNameResolver().getHostName();
    }
}
It works, but as a result two LocalStack Docker containers are running and the internal Docker container is still not reachable. Does somebody know a better solution?
STR:
gradle-6.7
cloud.localstack:localstack-utils:0.2.5

ns does not exist, com.mongodb.MongoCommandException

I'm using Mongock to perform the data migration but end up with the error below.
Gradle
plugins {
    id 'java'
}

group 'fete.bird'
version '0.0.1-SNAPSHOT'

repositories {
    mavenCentral()
}

ext {
    set('mongockVersion', "4.1.14")
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
    implementation 'org.springframework.boot:spring-boot-starter-data-mongodb:2.3.1.RELEASE'
    implementation "com.github.cloudyrock.mongock:mongock-standalone:${mongockVersion}"
    implementation "com.github.cloudyrock.mongock:mongodb-sync-v4-driver:${mongockVersion}"
    implementation 'org.mongodb:mongodb-driver-sync:4.1.0'
    compile project(':Data')
}
Standalone approach
public class main {

    public static void main(String[] args) {
        MongockStandalone();
    }

    private static void MongockStandalone() {
        var mongoUri = Configuration.getMongoUri();
        MongoClient mongoClient = MongoClients.create("mongodb://127.0.1:27017/");
        MongockStandalone.builder()
                .setDriver(MongoSync4Driver.withDefaultLock(mongoClient, "FeteBird-Product"))
                .addChangeLogsScanPackage("db.migration")
                .buildRunner().execute();
    }
}
Changelog
@ChangeLog(order = "001")
public class DbChangeLog001 {

    @ChangeSet(order = "001", id = "seedProduct", author = "San")
    public void seedProduct(MongoDatabase mongoDatabase) {
        mongoDatabase.createCollection("Product");
    }
}
Error
02:04:52.060 [main] DEBUG org.mongodb.driver.operation - Unable to retry operation listIndexes due to error "com.mongodb.MongoCommandException: Command failed with error 26 (NamespaceNotFound): 'ns does not exist: FeteBird-Product.mongockChangeLog' on server 127.0.1:27017. The full response is {"ok": 0.0, "errmsg": "ns does not exist: FeteBird-Product.mongockChangeLog", "code": 26, "codeName": "NamespaceNotFound"}"
02:04:52.061 [main] DEBUG com.github.cloudyrock.mongock.driver.mongodb.sync.v4.repository.MongoSync4RepositoryBase - Removing residual uniqueKeys for collection [mongockChangeLog]
02:04:52.063 [main] DEBUG org.mongodb.driver.protocol.command - Sending command '{"listIndexes": "mongockChangeLog", "cursor": {}, "$db": "FeteBird-Product", "lsid": {"id": {"$binary": {"base64": "2Ki32rm4RAacZEcl0iUR0A==", "subType": "04"}}}}' with request id 17 to database FeteBird-Product on connection [connectionId{localValue:3, serverValue:20}] to server 127.0.1:27017
02:04:52.069 [main] DEBUG org.mongodb.driver.protocol.command - Execution of command with request id 17 failed to complete successfully in 6.26 ms on connection [connectionId{localValue:3, serverValue:20}] to server 127.0.1:27017
com.mongodb.MongoCommandException: Command failed with error 26 (NamespaceNotFound): 'ns does not exist: FeteBird-Product.mongockChangeLog' on server 127.0.1:27017. The full response is {"ok": 0.0, "errmsg": "ns does not exist: FeteBird-Product.mongockChangeLog", "code": 26, "codeName": "NamespaceNotFound"}
at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)
at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:359)
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:280)
at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:100)
at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:490)
at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71)
at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:255)
at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:202)
at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:118)
at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:110)
at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:345)
at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:336)
at com.mongodb.internal.operation.CommandOperationHelper.executeCommandWithConnection(CommandOperationHelper.java:222)
at com.mongodb.internal.operation.ListIndexesOperation$1.call(ListIndexesOperation.java:171)
at com.mongodb.internal.operation.ListIndexesOperation$1.call(ListIndexesOperation.java:165)
at com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:582)
at com.mongodb.internal.operation.ListIndexesOperation.execute(ListIndexesOperation.java:165)
at com.mongodb.internal.operation.ListIndexesOperation.execute(ListIndexesOperation.java:73)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:178)
at com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135)
at com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)
at com.mongodb.client.internal.MongoIterableImpl.forEach(MongoIterableImpl.java:121)
at com.github.cloudyrock.mongock.driver.mongodb.sync.v4.repository.MongoSync4RepositoryBase.getResidualKeys(MongoSync4RepositoryBase.java:73)
at com.github.cloudyrock.mongock.driver.mongodb.sync.v4.repository.MongoSync4RepositoryBase.cleanResidualUniqueKeys(MongoSync4RepositoryBase.java:66)
at com.github.cloudyrock.mongock.driver.mongodb.sync.v4.repository.MongoSync4RepositoryBase.ensureIndex(MongoSync4RepositoryBase.java:52)
at com.github.cloudyrock.mongock.driver.mongodb.sync.v4.repository.MongoSync4RepositoryBase.initialize(MongoSync4RepositoryBase.java:39)
at io.changock.driver.core.driver.ConnectionDriverBase.initialize(ConnectionDriverBase.java:43)
at io.changock.runner.core.MigrationExecutor.initializationAndValidation(MigrationExecutor.java:189)
at io.changock.runner.core.MigrationExecutor.executeMigration(MigrationExecutor.java:61)
at io.changock.runner.core.ChangockBase.execute(ChangockBase.java:44)
at main.MongockStandalone(main.java:18)
at main.main(main.java:9)
This is probably because you are using quite a new version of MongoDB. Mongock used to rely on that field, but it's not needed anymore.
Since version 4.1.14, the 'ns' field is not used, so you should stop seeing this issue.
You can take a look at Mongock's documentation.
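If the message still appears, it may be worth double-checking which Mongock version Gradle actually resolved on the classpath, for example (on a Unix-like shell):
./gradlew dependencies --configuration runtimeClasspath | grep mongock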

How to parse complex YAML into Java object hierarchy (Spring Boot configuration)

I need to define service connection properties which will be used to configure RestTemplates used to make REST endpoint calls to Spring Boot services.
Note: In the future a service discovery mechanism will likely be involved, but I have been instructed NOT to pursue that avenue for the time being.
Service connection properties:
connection scheme - http/https
host name
host port
connection timeout
request timeout
socket timeout
default keep alive
default max connections per route
max total connections
Possible configuration (YAML):
services-config:
  global:
    scheme: http|https
    connectionTimeout: <timeout ms>
    requestTimeout: <timeout ms>
    socketTimeout: <timeout ms>
    defaultKeepAlive: <timeout ms>
    defaultMaxConnPerRoute: <count>
    maxTotalConnections: <count>
  services:
    - id: <ID> [?]
      name: <name> [?]
      host: localhost|<host>
      port: <port>
    - id: <ID> [?]
      name: <name> [?]
      host: localhost|<host>
      port: <port>
    ...
The intent is to place this information into an application.yml file so that it can be accessed within the Spring application.
Is there a way to pull this information in YAML format into a Java object hierarchy like the following:
class : ServicesConfig
    global   : GlobalConnectionProperties
    services : map<String, ServiceConnectionProperties>

class : GlobalConnectionProperties
    scheme : String (maybe enum instead)
    connectionTimeout : int
    requestTimeout : int
    socketTimeout : int
    defaultKeepAlive : int
    defaultMaxConnPerRoute : int
    maxTotalConnections : int

class : ServiceConnectionProperties
    id : String
    name : String
    host : String
    port : int
UPDATE:
Attempted the following and achieved partial success:
ServiceConfiguration.java
@Configuration
public class ServiceConfiguration {

    private ServicesConfig servicesConfig;

    @Autowired
    public ServiceConfiguration(ServicesConfig servicesConfig) {
        this.servicesConfig = servicesConfig;
        System.out.println(servicesConfig);
    }
}
ServicesConfig.java:
@ConfigurationProperties(value = "services-config")
@EnableConfigurationProperties(
        value = {GlobalConnectionProperties.class, ServiceConnectionProperties.class})
public class ServicesConfig {

    private GlobalConnectionProperties globalProps;
    private Map<String, ServiceConnectionProperties> services;

    public GlobalConnectionProperties getGlobalProps() {
        return globalProps;
    }

    public void setGlobalProps(GlobalConnectionProperties globalProps) {
        this.globalProps = globalProps;
    }

    public Map<String, ServiceConnectionProperties> getServices() {
        return services;
    }

    public void setServices(Map<String, ServiceConnectionProperties> services) {
        this.services = services;
    }

    @Override
    public String toString() {
        return "ServicesConfig [globalProps=" + globalProps + ", services=" + services + "]";
    }
}
application.yml:
services-config:
  global:
    scheme: http
    connectionTimeout: 30000
    requestTimeout: 30000
    socketTimeout: 60000
    defaultKeepAlive: 20000
    defaultMaxConnPerRoute: 10
    maxTotalConnections: 200
  services:
    - id: foo
      name: foo service
      host: 192.168.56.101
      port: 8090
    - id: bar
      name: bar service
      host: 192.168.56.102
      port: 9010
The GlobalConnectionProperties and ServiceConnectionProperties classes have been omitted as they just contain properties, getter/setter methods and toString().
When I run a service, the following appears on the console:
ServicesConfig [globalProps=null, services={0=ServiceConnectionProperties [id=foo, name=foo service, host=192.168.56.101, port=8090], 1=ServiceConnectionProperties [id=bar, name=bar service, host=192.168.56.102, port=9010]}]
I'm actually surprised that the list items were parsed and inserted into the map at all. I'd like the "id" to be the key instead of a default integer index. The odd thing is that the "global" properties have not been populated at all.
What do I need to correct to get the global object items read in?
Is there a way to key the map entries by the item "id" property?
What do I need to correct to get the global object items read in?
Binding of properties follows JavaBean conventions, so in ServicesConfig you need to rename the field and its getter/setter from globalProps to global.
https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-external-config-typesafe-configuration-properties as linked in the comment has more details.
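A minimal sketch of the renamed property inside ServicesConfig (just the relevant lines, so the global: block in application.yml binds to it):
    // Field name must match the YAML key "global"; getters/setters follow JavaBean conventions
    private GlobalConnectionProperties global;

    public GlobalConnectionProperties getGlobal() {
        return global;
    }

    public void setGlobal(GlobalConnectionProperties global) {
        this.global = global;
    }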
Is there a way to key the map entries by the item "id" property?
For the map entries, your YAML syntax is incorrect. See the syntax using ? and : at https://bitbucket.org/asomov/snakeyaml/wiki/Documentation#markdown-header-type-safe-collections.
Your map would be
services:
  ? foo
  : { id: foo,
      name: foo service,
      host: 192.168.56.101,
      port: 8090 }
  ? bar
  : { id: bar,
      name: bar service,
      host: 192.168.56.102,
      port: 9010 }
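Alternatively (an assumption on my part, relying on Spring Boot's relaxed binding for maps rather than anything specific to your setup), plain nested keys usually bind into a Map as well, with the map key taken from the YAML node name:
services:
  foo:
    id: foo
    name: foo service
    host: 192.168.56.101
    port: 8090
  bar:
    id: bar
    name: bar service
    host: 192.168.56.102
    port: 9010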

Use REST API support in local development environment for Hyperledger Fabric V1.0

I have set up a Hyperledger Fabric v1.0 network with 4 organisations, each having 1 peer, by following the steps in Building Your First Network.
Now I have
org1.example.com - with peer: peer0.org1.example.com and msp: Org1MSP
org2.example.com - with peer: peer0.org2.example.com and msp: Org2MSP
org3.example.com - with peer: peer0.org3.example.com and msp: Org3MSP
org4.example.com - with peer: peer0.org4.example.com and msp: Org4MSP
Now I can install the chaincode on the peers and instantiate it on the channel. I am also able to invoke and query the chaincode by using the commands mentioned here, for example:
Invoke: peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
-C $CHANNEL_NAME -n mycc -c '{"Args":["invoke","a","b","10"]}'
Query: peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'
I was previously using the Hyperledger Fabric v0.6 service provided by IBM Bluemix, and my Java applications invoked the chaincode through the REST API.
How can I use the REST API in this local network setup using the Docker images, so that my Java applications can interact with my chaincode?
Since I am not so familiar with this local network setup, please suggest how I can make it work.
Note:
I am using a Windows 7 machine and the network is set up by running the commands in the Docker Quickstart Terminal.
Thanks in advance.
There is no REST API in Hyperledger Fabric v1.0.0; however, there is a Java SDK which can be used to interact with peers. You can set up your Java project with the following Maven dependency:
<dependency>
    <groupId>org.hyperledger.fabric-sdk-java</groupId>
    <artifactId>fabric-sdk-java</artifactId>
    <version>1.0.0</version>
</dependency>
Now you can use SDK APIs to invoke/query your chaincodes:
Get instance of HF client
final HFClient client = HFClient.createNewInstance();
Setup crypto materials for client
// Set default crypto suite for HF client
client.setCryptoSuite(CryptoSuite.Factory.getCryptoSuite());

client.setUserContext(new User() {

    public String getName() {
        return "testUser";
    }

    public Set<String> getRoles() {
        return null;
    }

    public String getAccount() {
        return null;
    }

    public String getAffiliation() {
        return null;
    }

    public Enrollment getEnrollment() {
        return new Enrollment() {

            public PrivateKey getKey() {
                // Load your private key
            }

            public String getCert() {
                // Read client certificate
            }
        };
    }

    public String getMspId() {
        return "Org1MSP";
    }
});
Now channel configuration:
final Channel channel = client.newChannel("mychannel");
channel.addOrderer(client.newOrderer("orderer0", "grpc://localhost:7050"));
channel.addPeer(client.newPeer("peer0", "grpc://localhost:7051"));
channel.initialize();
Create transaction proposal:
final TransactionProposalRequest proposalRequest = client.newTransactionProposalRequest();

final ChaincodeID chaincodeID = ChaincodeID.newBuilder()
        .setName("myCC")
        .setVersion("1.0")
        .setPath("github.com/yourpackage/chaincode/")
        .build();

proposalRequest.setChaincodeID(chaincodeID);
proposalRequest.setFcn("fcn");
proposalRequest.setProposalWaitTime(TimeUnit.SECONDS.toMillis(10));
proposalRequest.setArgs(new String[]{"arg1", "arg2"});
Send proposal
final Collection<ProposalResponse> responses = channel.sendTransactionProposal(proposalRequest);
CompletableFuture<BlockEvent.TransactionEvent> txFuture = channel.sendTransaction(responses, client.getUserContext());
BlockEvent.TransactionEvent event = txFuture.get();
System.out.println(event.toString());
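The snippets above only show an invoke; a query follows the same pattern through QueryByChaincodeRequest. Below is a minimal sketch that reuses the client, channel and chaincodeID objects from the previous steps (the payload accessors may differ slightly between fabric-sdk-java point releases):
// Build and send a query proposal; nothing is submitted to the orderer
final QueryByChaincodeRequest queryRequest = client.newQueryProposalRequest();
queryRequest.setChaincodeID(chaincodeID);
queryRequest.setFcn("query");
queryRequest.setArgs(new String[]{"a"});

final Collection<ProposalResponse> queryResponses = channel.queryByChaincode(queryRequest);
for (ProposalResponse response : queryResponses) {
    // Each queried peer returns its own proposal response
    System.out.println(response.getStatus() + ": "
            + response.getProposalResponse().getResponse().getPayload().toStringUtf8());
}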

Strange Class-Not-Found-Exception with ClusterActorRefProvider

I'm seeing strange behaviour in my little Akka cluster project:
I have a very simple application.conf:
akka {
  # specify logger
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  stdout-loglevel = "DEBUG"

  # configure remote connection
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }

  remote {
    enabled-transport = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 3100
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://mycluster@127.0.0.1:3100"
    ]
  }
}
And a very simple main program:
public class MediatorNodeStartup {

    public static void main(String[] args) {
        String port = System.getProperty("config.port") == null ? "3100" : System.getProperty("config.port");

        Config config = ConfigFactory.parseString("akka.remote.netty.tcp.port=" + port)
                .withFallback(ConfigFactory.load());

        ActorSystem system = ActorSystem.create("mycluster", config);
    }
}
Akka, Akka-Remote and Akka-Cluster are all included via Maven and visible on the classpath.
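For reference, the ClusterActorRefProvider class ships in the akka-cluster artifact; the dependency declaration looks roughly like this (the Scala binary suffix and version below are placeholders for whatever Akka release the project actually uses):
<dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-cluster_2.11</artifactId>
    <!-- placeholder version; keep it identical to akka-actor and akka-remote -->
    <version>2.4.20</version>
</dependency>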
Now when I execute this, it just fails with a ClassNotFoundException: akka.cluster.ClusterActorRefProvider
although the akka.cluster.* package definitely is in my classpath.
Strangely enough, on another machine this code just works.
So I suppose it has something to do with my eclipse or runtime configuration... but sadly I have no idea where to start searching for the error.
Any ideas? I will provide further information if necessary.
