Schedule message for Azure Service Bus with JMS - java

I want to send a scheduled message to the Azure Service Bus with JMS.
My code is based on org.apache.qpid.jms.message.JmsMessage. I've found one solution to this problem, but it uses org.apache.qpid.proton.message.Message, which exposes .getMessageAnnotations() and therefore lets you edit the message annotations and add properties that Azure Service Bus recognizes and processes correctly. My message implementation is missing that method.
From what I've found in the official docs and the Node.js implementations, to schedule a message with Azure Service Bus you need to send a BrokerProperties/brokerProperties header containing valid JSON.
Other headers/properties are treated as custom properties and ignored by Azure Service Bus.
The official Azure docs about JMS say that setting ScheduledEnqueueTimeUtc is not officially supported by the JMS API, but that it can be achieved manually by setting a property.
So when I send a message to the queue, I can post-process it in a lambda and set some properties:
jmsTemplate.convertAndSend(queue, payload, message -> {
    var date = Date.from(ZonedDateTime.now(ZoneId.of("UTC")).plus(delay, ChronoUnit.MILLIS).toInstant());
    var brokerProps = Map.of("ScheduledEnqueueTimeUtc", date.toGMTString());
    message.setStringProperty(
            "brokerProperties",
            objectMapper.writeValueAsString(brokerProps)
    );
    return message;
});
And it doesn't work. The message arrives at the queue, but when I try to peek it in the Service Bus Explorer on Azure, an error is thrown in the browser console and the operation runs forever. I guess setting the brokerProperties property has some impact on Service Bus.
I have also tried sending the map with the date as a string in the format Azure uses, e.g. "ScheduledEnqueueTimeUtc", "Thu, 25 Mar 2021 12:54:00 GMT", but Service Bus also treats that as an error (peeking runs forever and an error is thrown in the browser console).
I've tried setting string properties like x-opt-scheduled-enqueue-time or x-ms-scheduled-enqueue-time, which I found in other threads on SO, but none of them works with my example.
I know Microsoft provides a Java library for communicating with Azure Service Bus, but I need to keep my code independent of the cloud provider and avoid adding extra libraries.
Is there any example of using the JMS message implementation from the package org.apache.qpid.jms.message.JmsMessage to set BrokerProperties for Azure Service Bus?

My team is currently facing the same issue.
We found that the ScheduledEnqueueTimeUtc property is set in the MessageAnnotationsMap. Unfortunately, org.apache.qpid.jms.provider.amqp.message.AmqpJmsMessageFacade, which is used by JMS, makes the getter and setter package-private. However, we found that you can use the setTracingAnnotation(String key, Object value) method instead.
Example:
public void sendDelayedMessage() {
    final var now = ZonedDateTime.now();
    jmsTemplate.send("test-queue", session -> {
        final var tenMinutesFromNow = now.plusMinutes(10);
        final var textMessage = session.createTextMessage("Hello Service Bus!");
        // The facade writes the annotation onto the underlying AMQP message,
        // which Service Bus reads for scheduling
        ((JmsTextMessage) textMessage).getFacade().setTracingAnnotation("x-opt-scheduled-enqueue-time", Date.from(tenMinutesFromNow.toInstant()));
        return textMessage;
    });
    log.info("Sent at: " + now);
}
Big Thanks to my teammate!!

Related

Java Custom Google Analytics 4 Server-Side Event User-IP

In my current Java project, it's easy to track server-side user events in the "old" Google Analytics Universal project with simple REST calls to Google Analytics. To make location tracking work, I could override the server IP with the user IP via the parameter "&uip=1.2.3.4" (https://developers.google.com/analytics/devguides/collection/protocol/v1/parameters?hl=de#uip).
Since upgrading to GA4 is recommended, I was able to change all the REST parameters in my project and show my events in the new dashboard, except for the user location. I can't find any information about such a parameter. I tried still using "uip", but now all my requests are located in the country of my server.
Unfortunately it's not possible to track the event client-side, because my project is a simple REST API returning only JSON data.
Does anyone have an idea whether there is a parameter like "uip" for GA4, or whether this isn't possible anymore?
I set up my parameters in the following way:
private String getQueryParameters(MeasurementEvent event) {
    StringBuilder body = new StringBuilder();
    body.append("?v=").append(version);
    body.append("&tid=").append(trackingId);
    body.append("&cid=").append(event.getClientId());
    body.append("&en=").append(eventName);
    body.append("&aip=1");
    if (StringUtils.hasText(event.getAction())) {
        body.append("&ep.useraction=").append(event.getAction());
    }
    if (StringUtils.hasText(event.getCategory())) {
        body.append("&ep.awsregion=").append(event.getCategory());
    }
    if (StringUtils.hasText(event.getLabel())) {
        body.append("&ep.softwarename=").append(event.getLabel());
    }
    if (StringUtils.hasText(event.getRemoteAddress())) {
        body.append("&uip=").append(event.getRemoteAddress());
    }
    if (StringUtils.hasText(event.getUrl())) {
        body.append("&dl=").append(event.getUrl());
    }
    return body.toString();
}

How to get instanceid from cloud_run?

The logs from Cloud Run spit out some good JSON with resource.labels.revision_name = my_name-00046-kip.
The JSON path labels.instanceId, however, looks more like this:
00bf4bf02d71261c0c1f55a601331b336a5d90d365cca1b28330dcf3e456fb7c07d5b72f1d3c9a971e391b5edc3512aea8559d172b24e639
Per this document I was able to get revision_name:
https://cloud.google.com/run/docs/reference/container-contract#env-vars
But I can't get the instance ID, and metrics must be reported per instance or two instances reporting in the same minute will be rejected. How do I get the instance ID (preferably through the Dockerfile, and if not, through an API call)? If Cloud Run boots up 10 instances under one revision name, I have to make sure to report metrics uniquely to the Generic Task resource, where I plan on filling in job_id with the instance ID.
thanks,
Dean
Please try using the metadata server to get the instance ID via the URL:
http://metadata.google.internal/computeMetadata/v1/instance/id
Note that the "Metadata-Flavor: Google" header is also required.
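For reference, a minimal Java sketch of that call using the JDK's built-in HTTP client (Java 11+); the URL and header are the ones above:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetadataClient {
    public static String fetchInstanceId() throws Exception {
        // The metadata server is only reachable from inside the Cloud Run container
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://metadata.google.internal/computeMetadata/v1/instance/id"))
                .header("Metadata-Flavor", "Google") // required, otherwise the request is rejected
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // the instance ID as a plain string
    }
}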
If you're using Java (as indicated by the tags), the easiest way to get the instance ID from the "internal metadata server" programmatically is probably to include the dependency com.google.cloud:google-cloud-core:1.93.5 (or newer) through Gradle/Maven and then call the following method:
import com.google.cloud.MetadataConfig;
String instanceId = MetadataConfig.getInstanceId();
The entries in the logging in Stackdriver are as follows:
labels: {
  instanceId: "00bf4bf02d4b374e91dda64bc4c4241a218302c4bcc73a01ecf85e582127e8c8076fcbe18b3cc934f5ed33e5dc1348c58cfd40cbecc0c9ae2a0b6d2356"
}
labels: {
  configuration_name: "cloudrunservice"
  location: "us-central1"
  project_id: "xxxx-xxxx-000"
  revision_name: "cloudrunservice-00002-leq"
  service_name: "cloudrunservice"
}
type: "cloud_run_revision"
As you mentioned, each entry has the instance ID, revision name, and service name. This way you do not have to worry about log entries being rejected because the same instance reported at the same time.
I could not see anything related to the instance ID in the UI when managing revisions, but by handling this JSON from the logging you can get the instance ID.

Errors with notification endpoint for local docker registry

I have successfully deployed a local Docker registry and implemented a listener endpoint to receive event notifications, following the documentation for configuration using a sample insecure configuration file. Pushing, pulling, and listing images work well. However, I still receive no event notifications. The registry logs are throwing some errors I do not really understand:
level=error msg="retryingsink: error writing events: httpSink{http://localhost:5050/event}: error posting: Post http://localhost:5050/event: dial tcp 127.0.0.1:5050: getsockopt: connection refused, retrying"
I would appreciate any info.
The endpoint listener is implemented in Java:
@RequestMapping(value = "/event", method = RequestMethod.POST, consumes = "application/json")
public void listener(@RequestBody Events event) {
    Event[] e = event.getEvents();
    for (int i = 0; i < e.length; i++) {
        System.out.println(e.length);
        System.out.println(e[i].toString());
    }
}
So after several hours of research, i.e. inspecting the private registry logs, I realized that the media type of the messages posted by the registry to notification endpoints is either application/octet-stream or application/vnd.docker.distribution.manifest.v2+json. Hence my solution was to permit all media types using consumes = "*/*", as specified in this Spring documentation, i.e.
@RequestMapping(value = "/event", method = RequestMethod.POST, consumes = "*/*")
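Put together, the adjusted handler looks roughly like this (a sketch assuming the same Spring MVC setup and the Events/Event classes from the question):
@RequestMapping(value = "/event", method = RequestMethod.POST, consumes = "*/*")
public void listener(@RequestBody Events event) {
    // The registry posts application/vnd.docker.distribution.manifest.v2+json or
    // application/octet-stream, so accept any media type
    for (Event e : event.getEvents()) {
        System.out.println(e.toString());
    }
}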

Vertx HttpClient getNow not working

I have a problem with the Vert.x HttpClient.
Here's code that tests a GET using Vert.x and plain Java.
Vertx vertx = Vertx.vertx();
HttpClientOptions options = new HttpClientOptions()
        .setTrustAll(true)
        .setSsl(false)
        .setDefaultPort(80)
        .setProtocolVersion(HttpVersion.HTTP_1_1)
        .setLogActivity(true);
HttpClient client = vertx.createHttpClient(options);
client.getNow("google.com", "/", response -> {
    System.out.println("Received response with status code " + response.statusCode());
});
System.out.println(getHTML("http://google.com"));
Where getHTML() is from here: How do I do a HTTP GET in Java?
This is my output:
<!doctype html><html... etc <- correct output from plain java
Feb 08, 2017 11:31:21 AM io.vertx.core.http.impl.HttpClientRequestImpl
SEVERE: java.net.UnknownHostException: failed to resolve 'google.com'. Exceeded max queries per resolve 3
But Vert.x can't connect. What's wrong here? I'm not using any proxy.
For reference: a solution, as described in this question and in tsegismont's comment here, is to set the flag vertx.disableDnsResolver to true:
-Dvertx.disableDnsResolver=true
in order to fall back to the JVM DNS resolver as explained here:
sometimes it can be desirable to use the JVM built-in resolver, the JVM system property -Dvertx.disableDnsResolver=true activates this behavior
I observed this DNS resolution issue with a redis client in a kubernetes environment.
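If you would rather not pass the flag on the command line, a minimal sketch of setting it programmatically (an assumption on my part: the property must be set before the Vertx instance is created, so the resolver has not been initialised yet):
import io.vertx.core.Vertx;

public class Main {
    public static void main(String[] args) {
        // Set before Vertx.vertx(), otherwise the Netty-based resolver is already configured
        System.setProperty("vertx.disableDnsResolver", "true");
        Vertx vertx = Vertx.vertx(); // falls back to the JVM built-in DNS resolver
    }
}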
I had this issue; what caused it for me was stale DNS servers being picked up by the Java runtime, i.e. servers registered for a network the machine was no longer connected to. The issue starts in the Sun JNDI implementation, also exists in Netty, which uses JNDI to bootstrap its list of name servers on most platforms, and then finally shows up in Vert.x.
I think a good place to fix this would be in the Netty layer, where the set of default DNS servers is bootstrapped. I have raised a ticket with the Netty project, so we'll see if they agree with me! Here is the Netty ticket.
In the meantime, a fairly basic workaround is to filter the default DNS servers detected by Netty based on whether they are reachable or not. Here is a code sample in Kotlin to apply before constructing the main Vert.x instance.
// The default set of name servers provided by JNDI can contain stale entries
// This default set is picked up by Netty and in turn by VertX
// To work around this, we filter for only reachable name servers on startup
val nameServers = DefaultDnsServerAddressStreamProvider.defaultAddressList()
val reachableNameServers = nameServers.stream()
        .filter { ns -> ns.address.isReachable(NS_REACHABLE_TIMEOUT) }
        .map { ns -> ns.address.hostAddress }
        .collect(Collectors.toList())
if (reachableNameServers.size == 0)
    throw StartupException("There are no reachable name servers available")

val opts = VertxOptions()
opts.addressResolverOptions.servers = reachableNameServers

// The primary Vertx instance
val vertx = Vertx.vertx(opts)
A little more detail in case it is helpful. I have a company machine which at some point was connected to the company network by a physical cable. Details of the company's internal name servers were set up by DHCP on the physical interface. Using the wireless interface at home, DNS for the wireless interface gets set to my home DNS, while the config for the physical interface is not updated. This is fine since that device is not active, and ipconfig /all does not show the internal company DNS servers. However, looking in the registry, they are still there:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
They get picked up by the JNDI mechanism, which feeds Netty and in turn Vert.x. Since they are not reachable from my home location, DNS resolution fails. I can imagine this home/office situation is not unique to me! I don't know whether something similar could occur with multiple virtual interfaces on containers or VMs; it could be worth looking into if you are having problems.
Here is the sample code which works for me.
public class TemplVerticle extends HttpVerticle {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Create the web client and enable SSL/TLS with a trust store
        WebClient client = WebClient.create(vertx,
                new WebClientOptions()
                        .setSsl(true)
                        .setTrustAll(true)
                        .setDefaultPort(443)
                        .setKeepAlive(true)
                        .setDefaultHost("www.w3schools.com")
        );
        client.get("www.w3schools.com")
                .as(BodyCodec.string())
                .send(ar -> {
                    if (ar.succeeded()) {
                        HttpResponse<String> response = ar.result();
                        System.out.println("Got HTTP response body");
                        System.out.println(response.body().toString());
                    } else {
                        ar.cause().printStackTrace();
                    }
                });
    }
}
Try using the WebClient instead of the HttpClient; here you have an example (with Rx):
private val client: WebClient = WebClient.create(vertx, WebClientOptions()
        .setSsl(true)
        .setTrustAll(true)
        .setDefaultPort(443)
        .setKeepAlive(true)
)

open fun <T> get(uri: String, marshaller: Class<T>): Single<T> {
    return client.getAbs(host + uri).rxSend()
            .map { extractJson(it, uri, marshaller) }
}
Another option is to use getAbs.
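For illustration, in plain Java getAbs takes the absolute URL, so the default host/port options are not needed (a sketch under that assumption; depending on your Vert.x version you may still need setSsl(true)/setTrustAll(true) as in the examples above):
import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;

public class GetAbsExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        WebClient client = WebClient.create(vertx);
        // getAbs resolves host, port, and path from the absolute URL
        client.getAbs("https://www.w3schools.com/")
              .send(ar -> {
                  if (ar.succeeded()) {
                      System.out.println(ar.result().bodyAsString());
                  } else {
                      ar.cause().printStackTrace();
                  }
              });
    }
}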

Exchange Web Services get Message Message-ID

I'm using the Java EWS library to try to sync messages from an Exchange mailbox. I'm able to get a list of all new messages created since the last sync date; however, I would really like to find out the Message-ID property of a message before loading it from Exchange.
Background: I'm trying to integrate EWS sync into an existing mail storage system. The Message-ID identification is solely for performance reasons, as our system already has millions of messages processed outside of EWS. Having to download them again would cause major performance overhead.
// Sample code to fetch the messages from sync
ChangeCollection<ItemChange> icc = service.syncFolderItems(folder.getId()
        , PropertySet.FirstClassProperties   // propertySet
        , null                               // ignoredItemIds
        , 25                                 // maxChangesReturned
        , SyncFolderItemsScope.NormalItems
        , currSyncState);
for (ItemChange ic : icc)
{
    if (ic.getChangeType() == ChangeType.Create)
    {
        Item item = ic.getItem();
        // how to get the Message-ID?
    }
}
Right now, the best way I can see to retrieve the Message-ID is by calling ic.getItem().getInternetMessageHeaders() after calling ic.load(). But that requires loading the entire message from Exchange, and I would like to avoid this step.
Edit: Another way to grab the Message-ID is
EmailMessage em = EmailMessage.bind( service, item.getId() );
em.getInternetMessageId()
However, that still loads the entire message.
The other solution is to start associating messages by the ItemId, but even that's not perfect: http://daniellang.net/exchange-web-services-itemid-is-not-permanent/
More about Message-ID: http://en.wikipedia.org/wiki/Message-ID
I believe the solution is this:
EmailMessage em = EmailMessage.bind( service, item.getId(),
new PropertySet( EmailMessageSchema.InternetMessageId) );
Explanation:
We have to bind the item to an EmailMessage, but instead of grabbing all the info, we only ask for the ID and any additional properties we want through the PropertySet parameter.
Inspired by this answer: https://stackoverflow.com/a/22482779/138228
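Putting the two together, a minimal sketch (assuming the same service, sync loop, and EWS types from the question):
for (ItemChange ic : icc)
{
    if (ic.getChangeType() == ChangeType.Create)
    {
        // Bind with a minimal PropertySet so only the Internet Message-ID is fetched,
        // not the whole message
        EmailMessage em = EmailMessage.bind(service, ic.getItem().getId(),
                new PropertySet(EmailMessageSchema.InternetMessageId));
        String messageId = em.getInternetMessageId();
        // compare messageId against the existing mail store before downloading the full item
    }
}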
