AS400 Job Queue via Java jt400

I am writing an interface between a Java application and an AS400.
For this purpose I use jt400. I managed to get information about the system status, such as CPU usage, and I can also retrieve the current status of subsystems and jobs.
Now I am looking for a way to inspect the different job queues on the AS400.
For example: I would like to know how many jobs are in which queue.
Is there a solution via jt400, or a different approach to access this information from Java?
The corresponding command on the AS400 is WRKJOBQ.
Best
LStrike
[Edit]
The following code is my filter for the JobList. But how do I configure the QSYSObjectPathName so that it matches WRKJOBQ?
QSYSObjectPathName path = new QSYSObjectPathName(.....);
JobList jList = new JobList(as400);
jList.addJobSelectionCriteria(JobList.SELECTION_PRIMARY_JOB_STATUS_JOBQ, true);
jList.addJobSelectionCriteria(JobList.SELECTION_JOB_QUEUE, path.getPath());
Job[] jobs = jList.getJobs(-1, 1);
System.out.println("Jobs Size: " + jobs.length);

You can use a JobList object for that, using SELECTION_JOB_QUEUE to filter the jobs.
Once your selection suits your needs, JobList#getLength() will give you the number of jobs.
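For example, a minimal sketch; the connection details and the MYLIB/MYJOBQ queue are placeholders for whatever WRKJOBQ shows on your system:

AS400 as400 = new AS400("myhost", "myuser", "mypassword"); // hypothetical connection
// WRKJOBQ MYLIB/MYJOBQ corresponds to the IFS path /QSYS.LIB/MYLIB.LIB/MYJOBQ.JOBQ
QSYSObjectPathName path = new QSYSObjectPathName("MYLIB", "MYJOBQ", "JOBQ");
JobList jList = new JobList(as400);
// Select only jobs that are sitting on a job queue...
jList.addJobSelectionCriteria(JobList.SELECTION_PRIMARY_JOB_STATUS_JOBQ, Boolean.TRUE);
jList.addJobSelectionCriteria(JobList.SELECTION_PRIMARY_JOB_STATUS_ACTIVE, Boolean.FALSE);
jList.addJobSelectionCriteria(JobList.SELECTION_PRIMARY_JOB_STATUS_OUTQ, Boolean.FALSE);
// ...and only on this particular queue.
jList.addJobSelectionCriteria(JobList.SELECTION_JOB_QUEUE, path.getPath());
jList.load(); // builds the list on the server
System.out.println("Jobs in MYLIB/MYJOBQ: " + jList.getLength());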

Related

Is this the correct waiting strategy when sending commands with Testcontainers?

I am using the Testcontainers DockerComposeContainer and sending shell commands using the execInContainer method once my containers are up and running:
@ClassRule
public static DockerComposeContainer<?> environment =
        new DockerComposeContainer<>(new File("docker-compose.yml"))
                .withExposedService(DB_1, DB_PORT)
                .waitingFor(SERVICE_1, Wait.defaultWaitStrategy())
                .withLocalCompose(true);
One of the commands simply moves a file, which will then be processed; I want to wait until the process inside the container has processed it before checking the results in my test.
service.execInContainer("cp", "my-file.zip", "/home/user/dir");
The way I'm checking whether the process has consumed my-file.zip, once it has been moved, is to inspect the logs:
String log = "";
while (!log.contains("Moving my-file.zip file to /home/user/dir")) {
    ExecResult cat = service.execInContainer("cat", "/my-service/logs/service.log");
    log = cat.getStdout();
}
This works, but I don't like the constant polling inside the while loop and was wondering if there is a better way to achieve this.
I've been looking into the Testcontainers internals; it uses the Java docker-java API under the hood, so I wondered whether there is a better way to do this via that API, or whether I could do the waiting using a library like Awaitility.
Thanks for any suggestions
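For the record, Awaitility would let you keep the same log check while bounding and pacing the polling. A sketch, assuming Awaitility 4.x and reusing the question's service handle and log line:

import static org.awaitility.Awaitility.await;
import java.time.Duration;

// Poll the container log every 500 ms, failing the test after 60 s
// (both durations are arbitrary choices for this sketch).
await().atMost(Duration.ofSeconds(60))
       .pollInterval(Duration.ofMillis(500))
       .until(() -> service
               .execInContainer("cat", "/my-service/logs/service.log")
               .getStdout()
               .contains("Moving my-file.zip file to /home/user/dir"));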

How to listen to multiple collections using change streams in Spring

I want to collect data from collection_one, collection_two, collection_three, etc. How do I do that?
ChangeStreamRequest request = ChangeStreamRequest.builder()
        .collation(Collation.of("collection_one"))
        .filter(Aggregation.newAggregation(match(where("operationType").exists(true))))
        .publishTo(krakenDtoMessageListener)
        .build();
container.register(request, CollectionOne.class);
Should I create multiple ChangeStreamRequests, or is one fine?
While configuring the change stream, you can specify a filter on the collection name.
Check the Java code below:
List<Bson> pipeline = singletonList(Aggregates.match(
        Filters.in("ns.coll", asList("coll1", "coll2", "coll3"))));
MongoCursor<ChangeStreamDocument<Document>> cursor =
        db.watch(pipeline).fullDocument(FullDocument.UPDATE_LOOKUP).iterator();
You can write a similar pipeline using the Spring framework.
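A sketch of what that might look like with Spring Data MongoDB (an untested assumption on my part: leaving out .collection(...) watches the whole database, and the match on ns.coll narrows it to the collections of interest; static imports of newAggregation, match, and where are assumed):

ChangeStreamRequest<Document> request = ChangeStreamRequest.builder()
        // no .collection(...): watch the whole database
        .filter(newAggregation(match(where("ns.coll")
                .in("collection_one", "collection_two", "collection_three"))))
        .publishTo(krakenDtoMessageListener)
        .build();
container.register(request, Document.class); // the listener must accept Document events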
You cannot watch multiple specific collections in a single request. According to the MongoDB documentation there are three options: watch a specific collection, watch a database, or watch the deployment (all databases). Refer to the MongoDB documentation for more details.

Kafka Streams app: separate reads from writes

I am pretty new to Kafka and Kafka Streams, so please bear with me. I would like to know if I am on the right track here.
I am writing to a Kafka topic at the moment and trying to access the data through a REST service. The raw data needs to be transformed before it can be accessed.
What I have so far is a producer that writes the raw data into a topic.
1.) Now I want a Streams app (a jar running in a container) that just transforms the data into my desired shape, following the materialized-view paradigm here.
Over-simplified version of 1.):
KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> source = builder.stream("my-raw-data-topic");
KafkaStreams streams = new KafkaStreams(builder, props);
KTable<String, Long> t = source.groupByKey().count("My-Table");
streams.start();
2.) And another Streams app (a jar running in a container) that just holds the KTable as some sort of repository, which can be accessed via a wrapping REST service.
Here I am kind of stuck on the proper way to work with the API.
What is the bare minimum to access and query a KTable? Do I need to assign the transformation topology to the builder again?
KStreamBuilder builder = new KStreamBuilder();
KTable table = builder.table("My-Table"); // Casting?
KafkaStreams streams = new KafkaStreams(builder, props);
RestService service = new RestService(table);
// Use the table as a repository which is wrapped by a REST service and gets updated reactively
Right now this is pseudo code.
Am I on the right path here? Does it make sense to separate 1.) and 2.)? Is this the intended way to work with Streams to materialize views? For me, it would have the benefit of scaling the writes and the reads independently via containers, wherever I see more traffic.
How is the repopulating of the KTable handled if either 1.) or 2.) crashes? Is this done via replication in the Streams API, or is it something I would need to address in code, like resetting the cursor and replaying the events?
A couple of comments:
In your code snippet (1) you modify your topology after you handed the builder to the KafkaStreams constructor:
KafkaStreams streams = new KafkaStreams(builder, props);
// don't modify builder anymore!
You should not do this; first specify your topology completely, and only afterwards create the KafkaStreams instance.
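In the same pre-1.0 API the question uses, the corrected ordering would look roughly like this:

KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> source = builder.stream("my-raw-data-topic");
KTable<String, Long> t = source.groupByKey().count("My-Table");

// The topology is complete; only now create and start the instance.
KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();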
About splitting your application into two: this can make sense to scale both parts independently, but it's hard to say in general. However, if you do split them, the first app needs to write the transformed data into an output topic, and the second one should read this output topic as a table (builder.table("output-topic-of-transformation")) to serve the REST requests; see the sketch below.
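A sketch of that split; the topic name comes from the paragraph above, while the explicit serdes and the four-argument table() overload are my assumptions (they match the 0.10.2-era API):

// App 1: publish the transformed KTable to the output topic.
t.to(Serdes.String(), Serdes.Long(), "output-topic-of-transformation");

// App 2 (a separate process): read the topic back as a queryable table.
KStreamBuilder builder2 = new KStreamBuilder();
KTable<String, Long> table = builder2.table(Serdes.String(), Serdes.Long(),
        "output-topic-of-transformation", "My-Table");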
For accessing the store of the KTable, you need to get a query handle via the provided store name:
ReadOnlyKeyValueStore<String, Long> keyValueStore =
        streams.store("My-Table", QueryableStoreTypes.<String, Long>keyValueStore());
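Queries against that handle are then plain method calls; the key below is just a hypothetical example:

Long count = keyValueStore.get("some-key"); // point lookup, hypothetical key
try (KeyValueIterator<String, Long> all = keyValueStore.all()) { // full scan
    while (all.hasNext()) {
        KeyValue<String, Long> entry = all.next();
        System.out.println(entry.key + " -> " + entry.value);
    }
}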
See the docs for further details:
http://docs.confluent.io/current/streams/developer-guide.html#interactive-queries

Extract JobID etc from Hadoop Job

I am running a Hadoop jar file inside a cluster. From the documentation, I know that Hadoop manages the JobID, start time, etc. Is it possible to get these parameters so that we can show them on our web interface, just to let the user know how much time the job will consume (e.g. an estimated duration)?
All the details shown in the JobTracker UI can be obtained easily by using the APIs provided.
Use the JobClient API: https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/JobClient.html
and the JobStatus API: https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/JobStatus.html
Using a combination of JobClient and JobStatus (e.g. jobsToComplete(), getAllJobs()) you can retrieve the JobID. Once you have the JobID, you can easily get all the other details by calling the functions in the API.
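A sketch of that flow against the old org.apache.hadoop.mapred API the links point to, assuming the cluster configuration is available on the classpath:

JobClient client = new JobClient(new JobConf()); // picks up the cluster config
for (JobStatus status : client.getAllJobs()) {   // every job the cluster knows about
    System.out.println("Job ID:   " + status.getJobID());
    System.out.println("Started:  " + status.getStartTime());
    System.out.println("Progress: " + status.mapProgress() + " / " + status.reduceProgress());
}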

AS400 Job's Thread details

I've already retrieved the details of a specific AS/400 job by its job number, but I have a problem: I want to get that specific job's thread details. Some jobs are multi-threaded, and I need the list of threads for a specific job along with each thread's details. I've checked the jt400 docs looking for a class for this, but I'm failing to find one :(
Thanks in advance!
JobList jobList = new JobList(system);
jobList.clearJobSelectionCriteria();
jobList.addJobSelectionCriteria(JobList.SELECTION_JOB_NUMBER, jobNumber);
Enumeration list = jobList.getJobs();
while (list.hasMoreElements()) {
    Job j = (Job) list.nextElement();
    System.out.println(j.getName());
    System.out.println(j.getStatus());
    System.out.println(j.getOutputQueue());
}
The API you're looking for is QWCOLTHD. JTOpen 8.1 was recently released, and I don't see the QWCOLTHD API implemented there.
It looks like you either need to email the developers and ask for this API, or write the implementation yourself. JTOpen is open source; you can get the source code, see how similar APIs are implemented, and then write the appropriate classes for QWCOLTHD.
