Java Bloomberg API - how to generate a Request without a Service

I am using the Bloomberg API to grab data. Currently, I have 3 processes which get data in the typical way as per the developer's guide, something like:
Service refDataService = session.getService("//blp/refdata");
Request request = refDataService.createRequest("ReferenceDataRequest");
request.append("securities", "IBM US Equity");
request.append("fields", "PX_LAST");
cid = session.sendRequest(request, null);
That works. Now I would like to expand the logic into something more like an update queue: each process would send its Request to an update-queue process, which would in turn be responsible for creating the session and service and then sending the requests. However, I don't see any way to create the Request without the Service. Also, since the request types (reference data, historical data, intraday ticks) are so varied and have such different properties, it is not trivial to create a container object which my update queue could read.
Any ideas on how to accomplish this? My ultimate goal is to have a process (I'm calling update queue) which takes in a list of requests, removes any duplicates, and goes out to Bloomberg for the data in 30 second intervals.
Thank you!

I have updated the jBloomberg library to include tick data. You can submit different types of queries to a BloombergSession, which acts as a queue. So if you want to submit different types of requests, you can write something like:
RequestBuilder<IntradayTickData> tickRequest =
        new IntradayTickRequestBuilder("SPX Index",
                DateTime.now().minusHours(2),
                DateTime.now());
RequestBuilder<IntradayBarData> barRequest =
        new IntradayBarRequestBuilder("SPX Index",
                DateTime.now().minusHours(2),
                DateTime.now())
                .period(5, TimeUnit.MINUTES);
RequestBuilder<ReferenceData> refRequest =
        new ReferenceRequestBuilder("SPX Index", "NAME");

Future<IntradayTickData> ticks = session.submit(tickRequest);
Future<IntradayBarData> bars = session.submit(barRequest);
Future<ReferenceData> name = session.submit(refRequest);
More examples available in the javadoc.
If you need to fetch the same information regularly, you can reuse a builder and use it in combination with a ScheduledThreadPoolExecutor for example.
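For instance, here is a minimal sketch of the 30-second update-queue idea from the question (assuming session is an already-started BloombergSession; the builder types are the jBloomberg ones shown above):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Reuse one builder and resubmit it on a fixed 30-second schedule; the
// session queues and dispatches the requests internally.
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
RequestBuilder<ReferenceData> pxRequest =
        new ReferenceRequestBuilder("IBM US Equity", "PX_LAST");
scheduler.scheduleAtFixedRate(() -> {
    Future<ReferenceData> latest = session.submit(pxRequest);
    // hand the Future to whatever consumes the results
}, 0, 30, TimeUnit.SECONDS);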
Note: the library is still in beta state, so don't use it blindly in a black box that trades automatically!

Using Redis Stream to Block HTTP response via HTTP long polling in Spring Boot App

I have a Spring Boot web application with the functionality to update an entity called StudioLinking. This entity describes a temporary, mutable, descriptive logical link between two IoT devices for which my web app is their cloud service. The links between these devices are ephemeral in nature, but the StudioLinking entity persists in the database for reporting purposes. StudioLinking is stored in the SQL-based datastore in the conventional way using Spring Data/Hibernate. From time to time this StudioLinking entity will be updated with new information from a REST API. When that link is updated, the devices need to respond (change colors, volume, etc.). Right now this is handled by polling every 5 seconds, but this creates lag between when a human user enters an update into the system and when the IoT devices actually update: it could be as little as a millisecond or up to 5 seconds! Clearly increasing the frequency of the polling is unsustainable, and the vast majority of the time there are no updates at all!
So, I am trying to develop another REST API on this same application with HTTP long polling, which will return when a given StudioLinking entity is updated or after a timeout. The listeners do not support WebSocket or similar, leaving me with long polling. Long polling can create a race condition: with consecutive messages, one message may be "lost" as it comes in between HTTP requests (while the connection is closing and reopening, a new "update" might come in and not be "noticed" if I used a plain Pub/Sub).
It is important to note that this "subscribe to updates" API should only ever return the LATEST and CURRENT version of the StudioLinking, and should only do so when there is an actual update or if an update happened since the last check-in. The "subscribe to updates" client will initially POST an API request to set up a new listening session and pass that along so the server knows who they are. Because it is possible that multiple devices will need to monitor updates to the same StudioLinking entity, I believe I can accomplish this by using separately named consumers in the Redis XREAD. (Keep this in mind for later in the question.)
After hours of research I believe the way to accomplish this is using Redis Streams.
I have found these two links regarding Redis Streams in Spring Data Redis:
https://www.vinsguru.com/redis-reactive-stream-real-time-producing-consuming-streams-with-spring-boot/
https://medium.com/@amitptl.in/redis-stream-in-action-using-java-and-spring-data-redis-a73257f9a281
I have also read this link about long polling; both of those examples just use a sleep timer during the long poll, which is fine for demonstration purposes, but obviously I want to do something useful:
https://www.baeldung.com/spring-deferred-result
And both these links were very helpful. Right now I have no problem figuring out how to publish the updates to the Redis Stream (this is untested pseudo-code, but I don't anticipate having any issues implementing it):
// In my StudioLinking entity
@PostUpdate
public void postToRedis() {
    StudioLinking link = this;
    ObjectRecord<String, StudioLinking> record = StreamRecords.newRecord()
            .ofObject(link)
            .withStreamKey(streamKey); // I am creating a stream for each individual linking, probably?
    this.redisTemplate
            .opsForStream()
            .add(record)
            .subscribe(System.out::println);
    atomicInteger.incrementAndGet();
}
But I fall flat when it comes to subscribing to said stream. Basically, here is what I want to do (please excuse the butchered pseudocode; it is for idea purposes only, and I am well aware that it is in no way indicative of how the language and framework actually behave):
// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public DeferredResult<ResponseEntity<?>> subscribeToUpdates(@PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    LOG.info("Received async-deferredresult request");
    DeferredResult<ResponseEntity<?>> output = new DeferredResult<>(5000L);
    output.onTimeout(() ->
            output.setErrorResult(
                    ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT)
                            .body("IT WAS NOT UPDATED!")));
    ForkJoinPool.commonPool().submit(() -> {
        //----------------------------------------------
        // Made up stuff... here is where I want to subscribe to a stream and block!
        //----------------------------------------------
        LOG.info("Processing in separate thread");
        // Subscribe to the Redis Stream, get any updates that happened between long-polls,
        // then block until/if a new message comes over the stream
        var subscription = listenerContainer.receiveAutoAck(
                Consumer.from(String.valueOf(linkId), String.valueOf(updatesId)),
                StreamOffset.create(String.valueOf(linkId), ReadOffset.lastConsumed()),
                streamListener);
        listenerContainer.start();
        output.setResult(ResponseEntity.ok("IT WAS UPDATED!"));
    });
    LOG.info("servlet thread freed");
    return output;
}
So is there a good explanation of how I would go about this? I think the answer lies within https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ReactiveRedisTemplate.html but I am not a big enough Spring power user to really understand the terminology within the JavaDocs (the Spring documentation is really good, but the JavaDocs are written in dense technical language which I appreciate but don't quite understand yet).
There are two more hurdles to my implementation:
1. My understanding of Spring is not at 100% yet. I haven't yet reached that a-ha moment where I really, fully understand why all these beans are floating around. I think this is the key to why I am not getting things here: the configuration for Redis is floating around in the Spring ether and I am not grasping how to just call it. I really need to keep investigating this (it is a huge hurdle to Spring for me).
2. These StudioLinking entities are short-lived, so I need to do some cleanup too. I will implement this later once I get the whole thing up off the ground; I do know it will be needed.
Why don't you use a blocking polling mechanism? There is no need to use the fancy reactive features of spring-data-redis. Just use a simple blocking read with a 5-second timeout, so this call might take around 6 seconds or so. You can decrease or increase the blocking timeout.
class LinkStatus {
    private final boolean updated;

    LinkStatus(boolean updated) {
        this.updated = updated;
    }
}

// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public LinkStatus subscribeToUpdates(
        @PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    StreamOperations<String, String, String> op = redisTemplate.opsForStream();
    Consumer consumer = Consumer.from("test-group", "test-consumer");
    // auto-ack blocking stream read of size 1 with a timeout of 5 seconds
    StreamReadOptions readOptions = StreamReadOptions.empty().block(Duration.ofSeconds(5)).count(1);
    // consumer-group reads must use the last-consumed (">") offset rather than "$"
    List<MapRecord<String, String, String>> records =
            op.read(consumer, readOptions, StreamOffset.create("test-stream", ReadOffset.lastConsumed()));
    return new LinkStatus(!CollectionUtils.isEmpty(records));
}
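If you also want to free the servlet thread while the 5-second blocking read runs, the same read can be wrapped in the DeferredResult pattern from the question. A minimal sketch, assuming the same redisTemplate bean and a hypothetical per-link stream naming scheme ("link-" + linkId):
@GetMapping("/subscribe-to-updates-async/{linkId}/{updatesId}")
public DeferredResult<LinkStatus> subscribeToUpdatesAsync(
        @PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    // Time out slightly after the blocking read so the read gets a chance to return
    DeferredResult<LinkStatus> output = new DeferredResult<>(6000L, new LinkStatus(false));
    ForkJoinPool.commonPool().submit(() -> {
        StreamOperations<String, String, String> op = redisTemplate.opsForStream();
        StreamReadOptions readOptions = StreamReadOptions.empty().block(Duration.ofSeconds(5)).count(1);
        List<MapRecord<String, String, String>> records = op.read(
                Consumer.from("link-" + linkId, "consumer-" + updatesId), // hypothetical naming
                readOptions,
                StreamOffset.create("link-" + linkId, ReadOffset.lastConsumed()));
        output.setResult(new LinkStatus(!CollectionUtils.isEmpty(records)));
    });
    return output; // the servlet thread is released immediately
}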

REST Service Task

I am following an example of a REST Service Task.
I start my process engine using
val configuration = new StandaloneProcessEngineConfiguration()
configuration.setProcessEngineName(processEngineName)
Here is my bpmn file snippet
<process id="approve-loan" name="Loan Approval" isExecutable="true">
<serviceTask id="process_task" activiti:class="com.noggin.bpm.loan.ProcessRequestDelegate" activiti:exclusive="true" name="compute
Task">
<extensionElements>
<activiti:connector>
<activiti:connectorId>http-connector</activiti:connectorId>
<activiti:inputOutput>
<activiti:inputParameter name="url">http://127.0.0.1:5004/Hello/sayhello</activiti:inputParameter>
<activiti:inputParameter name="method">POST</activiti:inputParameter>
<activiti:inputParameter name="headers">
<activiti:map>
<activiti:entry key="Accept">application/json</activiti:entry>
<activiti:entry key="Content-type">application/json</activiti:entry>
</activiti:map>
</activiti:inputParameter>
<activiti:inputParameter name="payload"><![CDATA[{"bundleId":"101","script":"def greet = {\n \"Hello World\"\n }\n greet()"}]]></activiti:inputParameter>
<activiti:outputParameter name="isActive">Result</activiti:outputParameter>
</activiti:inputOutput>
</activiti:connector>
</extensionElements>
I start the process like this
val processEngine = ProcessEngines.getProcessEngine(processEngineName)
val runtime = processEngine.getRuntimeService
val processInstance = runtime.startProcessInstanceByKey(processInstanceKey)
I am successfully able to send the payload to http://127.0.0.1:5004/Hello/sayhello.
My question is how to retrieve the response message from the place where I started the instance, since the response will be a JSON message which should be sent back to the process initiator.
I believe I saw a similar question from you posted to the Camunda forum yesterday.
Either way, I believe the question and the answer are the same.
Let me make sure I understand what you are asking.
1. You are starting the instance using the Java API
2. Your process definition includes a single Service Task that makes a REST call.
3. Your JavaDelegate class populates the "Result" process variable with the response of the REST call.
4. You want to capture the response.
If I have captured your requirement, then I think the problem is in your understanding of how the BPMN engine works.
With the process as you have it modeled, the process instance will start, make the REST call, populate the Response variable and then immediately end.
As you have currently modeled the process, you will not be able to capture the response during process execution.
Your options:
1. Change your model to either send the "Result" using a message service of some sort, or add a wait state where you can retrieve the response.
2. Use the Historical query REST API (or the equivalent Java API) to retrieve the Result payload from the completed instance.
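For option 2, a minimal sketch of the Java side (assuming an Activiti/Camunda HistoryService and a history level of at least "audit" so that variables are recorded):
// After the instance has completed, read the "Result" variable back from history
HistoryService historyService = processEngine.getHistoryService();
HistoricVariableInstance result = historyService
        .createHistoricVariableInstanceQuery()
        .processInstanceId(processInstance.getId())
        .variableName("Result")
        .singleResult();
if (result != null) {
    Object responseJson = result.getValue(); // the JSON response stored by the connector
}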
It really depends on your use case as to the most appropriate option to take.
Cheers,
Greg

How to balance the load when building cache from App Engine?

I currently have the following situation, which has bothered me for a couple of months now.
The case
I have built a Java (FX) application which serves as a cash register for my shop. The application contains a lot of classes (such as Customer, Transaction, etc.), which are shared with the server API. The server API is hosted on Google App Engine.
Because we also have an online shop, I have chosen to build the cache of the entire database on startup of the application. To do this I call the GET of my Data API for each class/table:
protected QueryBuilder performGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException, ApiException, JSONException {
    Connection conn = connectToCloudSQL();
    log.info("Parameters: " + Functions.parameterMapToString(req.getParameterMap()));
    String tableName = this.getTableName(req);
    log.info("TableName: " + tableName);
    GetQueryBuilder queryBuilder = DataManager.executeGet(conn, req.getParameterMap(), tableName, null);

    // Get the correct method to create the objects
    String camelTableName = Functions.snakeToCamelCase(tableName);
    String parsedTableName = Character.toUpperCase(camelTableName.charAt(0)) + camelTableName.substring(1);
    List<Object> objects = new ArrayList<>();
    try {
        log.info("Parsed Table Name: " + parsedTableName);
        Method creationMethod = ObjectManager.class.getDeclaredMethod("create" + parsedTableName, ResultSet.class, boolean.class);
        while (queryBuilder.getResultSet().next()) {
            // Create new objects with the ObjectManager
            objects.add(creationMethod.invoke(null, queryBuilder.getResultSet(), false));
        }
        log.info("List of objects created");
        creationMethod = null;
    } catch (Exception e) {
        camelTableName = null;
        parsedTableName = null;
        objects = null;
        throw new ApiException(e, "Something went wrong while iterating through ResultSet.", ErrorStatus.NOT_VALID);
    }
    Functions.listOfObjectsToJson(objects, res.getOutputStream());
    log.info("GET Request succeeded");

    // Clean up objects
    camelTableName = null;
    parsedTableName = null;
    objects = null;
    closeConnection(conn);
    return queryBuilder;
}
It simply gets every row from the requested table in my Cloud SQL database. Then it creates the objects, with the class that is shared with the client application. Lastly, it converts these objects to JSON using Gson. Some of my tables have 10,000+ rows, and then it takes approx. 5-10 seconds to do this.
At the client, I convert this JSON back to a list of objects using the same shared class. First I load the essential classes sequentially (because otherwise the application won't start), and after that I load the rest of the classes in the background with separate threads.
The problem
Every time I load up the cache, there is a chance (1 in 4) that the server responds with a DeadlineExceededException on some of the bigger tables. I believe this has something to do with Google App Engine not being able to fire up a new instance in time, so the computation time exceeds the limit.
I know it has something to do with loading the objects in background threads, because these all start at the same time. When I delay the start of these threads by 3 seconds, the error occurs a lot less, but is still present. Because the application loads 15 classes in the background, delaying them is not ideal: the application will only work partly until it is done. It is also not an option to load everything before starting, because that would take more than 2 minutes.
Does anyone know how to set up some load balancing on Google App Engine for this? I would like to solve this on the server side.
You clearly have an issue with warm-up requests and a query that takes quite long. You have the usual options:
Do some profiling and reduce the cost of your method invocations
Use caching (memcache) to cache some of the results
If those options don't work for you, you should parallelize your computations. One thing that comes to my mind is that you could reliably reduce request times if you simply split your request into multiple parallel requests like so:
Let's say your table contains 5k rows.
Then you create 50 requests, each handling 100 rows.
Aggregate the results on the server or client side and respond.
It'll be quite tough to do this on just the server side, but it should be possible if your now (much) smaller tasks return within a couple of seconds; a client-side sketch follows.
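For illustration, a hedged sketch of that client-side fan-out (fetchPage and its limit/offset parameters are hypothetical stand-ins for whatever paging your Data API supports; assume this snippet runs in a method that declares throws Exception):
// Fan out one big table read into 50 parallel page requests of 100 rows each,
// then aggregate the pages. Error handling elided for brevity.
ExecutorService pool = Executors.newFixedThreadPool(10);
int pageSize = 100;
int pages = 50;
List<Future<List<Customer>>> futures = new ArrayList<>();
for (int i = 0; i < pages; i++) {
    final int offset = i * pageSize;
    // fetchPage is a hypothetical helper that issues one HTTP GET with
    // limit/offset parameters and deserializes the returned JSON rows
    futures.add(pool.submit(() -> fetchPage("customer", offset, pageSize)));
}
List<Customer> all = new ArrayList<>();
for (Future<List<Customer>> f : futures) {
    all.addAll(f.get()); // blocks until that page has arrived
}
pool.shutdown();
On the server side this only requires the Data API to honor the same limit/offset in its SQL, which keeps each individual request well under the deadline.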
Alternatively, you could return a job ID at once and make the client poll for the result after a couple of seconds. This would however require a small change on the client side. It's the better option though, IMHO, especially if you want to use a task queue for creating your response.

JBossESB - queue to service mapping

I am intercepting messages that are sent through JBossESB. I am using pipeline interceptors to do so.
The problem is that although the sender is a service (for example PortReference < logical:BlueServiceESB#BlueListener >), the name of the receiver is a queue (not a service). That is logical because in some cases multiple services can receive messages from a given queue, but usually each queue is mapped to only one service.
I would like to know which queue is mapped to which service, so I can display/save this information and show a message as service ---> service (not service ---> queue).
I know that I can get the name of the queue mapped to a service using the registry like this:
System.setProperty("javax.xml.registry.ConnectionFactoryClass", "org.apache.ws.scout.registry.ConnectionFactoryImpl");
// Retrieving information from the ESB Registry
Registry reg = RegistryFactory.getRegistry();
System.out.println(reg.findAllServices());
List<EPR> eprs = reg.findEPRs("FirstServiceESB", "SimpleListener");
System.out.println(eprs);
I would like to reverse this approach: the queue is the input and the service (EPR = endpoint reference = service) is the output. Is there any way to do this, or am I just trying to do the impossible here? I have found no tutorials or questions on this topic whatsoever.
Thanks for any tips!
As this question has 25 up-votes, this seems to be a useful feature. JBossESB is open source software, so you could implement the feature yourself and contribute it to the community! Or just create a change request, hoping that somebody else will do it...
Try querying for all of the queues and building a reverse-lookup map; I don't think there is any function that allows searching for services by queue.
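A rough sketch of that reverse lookup, reusing the registry calls from the question (the category name and the EPR accessor getAddr().getAddress() are assumptions and may differ between JBossESB versions):
// Build a queue-address -> service-name index once, then consult it in the interceptor
Map<String, String> queueToService = new HashMap<>();
Registry reg = RegistryFactory.getRegistry();
for (String service : reg.findAllServices()) { // assumes service names come back directly
    for (EPR epr : reg.findEPRs("FirstServiceESB", service)) { // category is an assumption
        queueToService.put(epr.getAddr().getAddress(), service);
    }
}
// In the interceptor: map the receiver queue back to its owning service
String receiverService = queueToService.get(receiverQueueName);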

Measuring Task Queue Costs in Google App Engine

I am measuring the cost of requests to GAE by inspecting the x-appengine-estimated-cpm-us-dollars header. This works great, and in combination with x-appengine-resource-usage and x-traceurl I can get even more detailed information.
However, a large part of my application runs in the context of task queues. Thus, a huge part of the instance-hour costs is consumed by queues. Each time code is executed outside of a request, its costs are not included in the x-appengine-estimated-cpm-us-dollars header.
I am looking for a way to measure the full cost consumed by each request, i.e. the cost generated by the request itself plus the cost of the tasks that were added by this request.
It is overkill. There is a tool that can download your Google App Engine logs and convert them to SQLite:
http://code.google.com/p/google-app-engine-samples/source/browse/trunk/logparser/logparser.py
With this tool, the CPM USD for both task requests and normal requests is downloaded together. You can store each day's log in a separate SQLite file and do as much analysis as you want.
In terms of relating the cost of a task back to its original request: the log data downloaded with this tool includes the full output of the logging module. So you can simply:
1. Log a generated ID in the original request.
2. Pass the ID to the task.
3. Log the received ID again in the task request.
4. Pair up normal and task requests via the ID.
for example:
# in the original request
a_id = generate_a_random_id()
logging.info(a_id)  # the id will be included in the log
taskqueue.add(url='/path_to_task', params={'id': a_id})

# in the task request
a_id = self.request.get('id')
logging.info(a_id)
EDIT1
I think there is another possible way to estimate the cost of a normal request plus its task request.
The trick is to change the async task into a sync one (assuming the cost would be the same).
I didn't try it, but it is much easier to test.
# in the original request, add a variable to identify debug mode
debug = self.request.get('DEBUG')
if debug:
    self.redirect('/path_to_task')
else:
    taskqueue.add(url='/path_to_task')
Thus, when testing the normal request with the DEBUG parameter, it will first process the normal request and return x-appengine-estimated-cpm-us-dollars for the normal request. It will then redirect your test client to the corresponding task request (a task request can also be accessed and triggered via its URL, just like a normal request) and return x-appengine-estimated-cpm-us-dollars for the task request. You can simply add the two together to get the total cost.
