I am consuming JSON messages from a Kafka topic. I would like to extract a field from each message (for example, a device ID) and create 'n' sessions for 'n' unique device IDs.
I have tried creating a new session instance for every unique ID I receive, i.e. a new branch in the pipeline per ID, but I am unable to push subsequent messages to the corresponding branch that already exists.
The expected result is this: suppose we are getting messages like
{ID:1,...}, {ID:2,...}, {ID:3,...},{ID:1,...}
There would be three different sessions created and the fourth message would go to the session for device ID 1.
Is there a way to do this in the Apache Beam programming paradigm, or in plain Java? Any help would be greatly appreciated.
Yes, this is possible with the Beam paradigm if you use a custom WindowFn. You can subclass the Sessions class and modify it to set gap durations differently based on the ID of each element. You can do this in assignWindows, which looks like this in Sessions:
@Override
public Collection<IntervalWindow> assignWindows(AssignContext c) {
// Assign each element into a window from its timestamp until gapDuration in the
// future. Overlapping windows (representing elements within gapDuration of
// each other) will be merged.
return Arrays.asList(new IntervalWindow(c.timestamp(), gapDuration));
}
The AssignContext class can be used to access the element being assigned this window, which will allow you to retrieve the ID of that element.
It also sounds like you want elements with different IDs to be grouped in different windows (i.e. if element A and B come in within the gap duration but with different IDs, they should still be in different windows). This can be done by performing a GroupByKey with the ID of your elements as keys. Session windows apply on a per-key basis as described in the Beam Programming Guide, so this will separate the elements by IDs.
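For illustration, a minimal sketch of that keying step in Java (the JSON shape, the "ID" field, and the 10-minute gap are assumptions):
PCollection<String> messages = ...; // JSON strings read from Kafka
PCollection<KV<String, Iterable<String>>> perDeviceSessions = messages
    .apply("KeyByDeviceId",
        WithKeys.of((String json) -> new JSONObject(json).getString("ID"))
                .withKeyType(TypeDescriptors.strings()))
    .apply("SessionWindows",
        Window.<KV<String, String>>into(Sessions.withGapDuration(Duration.standardMinutes(10))))
    .apply(GroupByKey.<String, String>create());
Each key (device ID) then gets its own independent session windows, so a fourth message with ID 1 lands in the session for device 1.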
I have implemented Java and Python examples for this use case. The Java one follows the approach suggested by Daniel Oliveira but I think it's nice to share a working sample.
Note that the Java example is featured in the Beam common patterns docs. Custom merging windows isn't supported in Python (with fnapi).
Java version:
We can adapt the code from Session windows to fit our use case.
Briefly, when records are windowed into sessions they get assigned to a window that begins at the element’s timestamp (unaligned windows) and adds the gap duration to the start to calculate the end. The mergeWindows function will then combine all overlapping windows per key resulting in an extended session.
We’ll need to modify the assignWindows function to create a window with a data-driven gap instead of a fixed duration. We can access the element through WindowFn.AssignContext.element(). The original assignment function is:
public Collection<IntervalWindow> assignWindows(AssignContext c) {
// Assign each element into a window from its timestamp until gapDuration in the
// future. Overlapping windows (representing elements within gapDuration of
// each other) will be merged.
return Arrays.asList(new IntervalWindow(c.timestamp(), gapDuration));
}
The modified function will be:
@Override
public Collection<IntervalWindow> assignWindows(AssignContext c) {
// Assign each element into a window from its timestamp until gapDuration in the
// future. Overlapping windows (representing elements within gapDuration of
// each other) will be merged.
Duration dataDrivenGap;
JSONObject message = new JSONObject(c.element().toString());
try {
dataDrivenGap = Duration.standardSeconds(Long.parseLong(message.getString(gapAttribute)));
}
catch(Exception e) {
dataDrivenGap = gapDuration;
}
return Arrays.asList(new IntervalWindow(c.timestamp(), dataDrivenGap));
}
Note that we have added a couple of extra things:
A default value for cases where the custom gap is not present in the data
A way to set the attribute from the main pipeline via a method on the custom WindowFn.
The withDefaultGapDuration and withGapAttribute methods are:
/** Creates a {@code DynamicSessions} {@link WindowFn} with the specified gap duration. */
public static DynamicSessions withDefaultGapDuration(Duration gapDuration) {
return new DynamicSessions(gapDuration, "");
}
public DynamicSessions withGapAttribute(String gapAttribute) {
return new DynamicSessions(gapDuration, gapAttribute);
}
We will also add a new field (gapAttribute) and constructor:
public class DynamicSessions extends WindowFn<Object, IntervalWindow> {
/** Duration of the gaps between sessions. */
private final Duration gapDuration;
/** Pub/Sub attribute that modifies session gap. */
private final String gapAttribute;
/** Creates a {@code DynamicSessions} {@link WindowFn} with the specified gap duration. */
private DynamicSessions(Duration gapDuration, String gapAttribute) {
this.gapDuration = gapDuration;
this.gapAttribute = gapAttribute;
}
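Since DynamicSessions is a merging WindowFn, the remaining required overrides can simply delegate to Beam's helpers, the same way Sessions itself does. A minimal sketch (other overrides such as isCompatible are omitted):
@Override
public void mergeWindows(MergeContext c) throws Exception {
  // Merge overlapping IntervalWindows per key, exactly as Sessions does.
  MergeOverlappingIntervalWindows.mergeWindows(c);
}

@Override
public Coder<IntervalWindow> windowCoder() {
  return IntervalWindow.getCoder();
}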
Then, we can window our messages into the new custom sessions with:
.apply("Window into sessions", Window.<String>into(DynamicSessions
.withDefaultGapDuration(Duration.standardSeconds(10))
.withGapAttribute("gap")))
In order to test this we'll use a simple example with controlled input. For our use case we'll consider different needs for our users depending on the device where the app is running: desktop users can be idle for long periods (allowing for longer sessions), whereas we only expect short-span sessions from our mobile users. We generate some mock data, where some messages contain the gap attribute and others omit it (the window gap falls back to the default for those):
.apply("Create data", Create.timestamped(
TimestampedValue.of("{\"user\":\"mobile\",\"score\":\"12\",\"gap\":\"5\"}", new Instant()),
TimestampedValue.of("{\"user\":\"desktop\",\"score\":\"4\"}", new Instant()),
TimestampedValue.of("{\"user\":\"mobile\",\"score\":\"-3\",\"gap\":\"5\"}", new Instant().plus(2000)),
TimestampedValue.of("{\"user\":\"mobile\",\"score\":\"2\",\"gap\":\"5\"}", new Instant().plus(9000)),
TimestampedValue.of("{\"user\":\"mobile\",\"score\":\"7\",\"gap\":\"5\"}", new Instant().plus(12000)),
TimestampedValue.of("{\"user\":\"desktop\",\"score\":\"10\"}", new Instant().plus(12000)))
.withCoder(StringUtf8Coder.of()))
Visually:
For the desktop user there are only two events, separated by 12 seconds. No gap is specified, so it defaults to 10s and the two scores will not be added up, as they belong to different sessions.
The other user, mobile, has 4 events separated by 2, 7 and 3 seconds respectively. None of those separations is greater than the default gap, so with standard sessions all events would belong to a single session with a total score of 18:
user=desktop, score=4, window=[2019-05-26T13:28:49.122Z..2019-05-26T13:28:59.122Z)
user=mobile, score=18, window=[2019-05-26T13:28:48.582Z..2019-05-26T13:29:12.774Z)
user=desktop, score=10, window=[2019-05-26T13:29:03.367Z..2019-05-26T13:29:13.367Z)
With the new sessions we specify a "gap" attribute of 5 seconds on those events. The third message comes 7 seconds after the second one, so it now falls into a different session. The previous large session with score 18 will be split into two 9-point sessions:
user=desktop, score=4, window=[2019-05-26T14:30:22.969Z..2019-05-26T14:30:32.969Z)
user=mobile, score=9, window=[2019-05-26T14:30:22.429Z..2019-05-26T14:30:30.553Z)
user=mobile, score=9, window=[2019-05-26T14:30:33.276Z..2019-05-26T14:30:41.849Z)
user=desktop, score=10, window=[2019-05-26T14:30:37.357Z..2019-05-26T14:30:47.357Z)
Full code here. Tested with Java SDK 2.13.0
Python version:
Analogously, we can extend the same approach to the Python SDK. The code for the Sessions class can be found here. We’ll define a new DynamicSessions class. Inside the assign method we can access the processed record using context.element and modify the gap according to data.
Original:
def assign(self, context):
timestamp = context.timestamp
return [IntervalWindow(timestamp, timestamp + self.gap_size)]
Extended:
def assign(self, context):
timestamp = context.timestamp
try:
gap = Duration.of(context.element[1]["gap"])
except:
gap = self.gap_size
return [IntervalWindow(timestamp, timestamp + gap)]
If the input data contains a gap field it will use it to override the default gap size. In our pipeline code we just need to window events into DynamicSessions instead of the standard Sessions:
'user_session_window' >> beam.WindowInto(DynamicSessions(gap_size=gap_size),
timestamp_combiner=window.TimestampCombiner.OUTPUT_AT_EOW)
With standard sessions the output is as follows:
INFO:root:>> User mobile had 4 events with total score 18 in a 0:00:22 session
INFO:root:>> User desktop had 1 events with total score 4 in a 0:00:10 session
INFO:root:>> User desktop had 1 events with total score 10 in a 0:00:10 session
With our custom windowing mobile events are split into two different sessions:
INFO:root:>> User mobile had 2 events with total score 9 in a 0:00:08 session
INFO:root:>> User mobile had 2 events with total score 9 in a 0:00:07 session
INFO:root:>> User desktop had 1 events with total score 4 in a 0:00:10 session
INFO:root:>> User desktop had 1 events with total score 10 in a 0:00:10 session
All files here. Tested with Python SDK 2.13.0
Related
I am trying to send test USDT to a particular account in Java using the following code:
final Web3j web3 = createWeb3If(ethNetworkUrl);
final Credentials credentials = Credentials.create(privateKey);
final ERC20 usdtContract = ERC20.load(usdtContractAddress, web3, credentials, new TestGasProvider());
usdtContract.transfer(exchangeAddress, BigInteger.valueOf(10)).send();
The last statement results in the following exception:
java.lang.RuntimeException: Error processing transaction request: intrinsic gas too low
at org.web3j.tx.TransactionManager.processResponse(TransactionManager.java:176)
at org.web3j.tx.TransactionManager.executeTransaction(TransactionManager.java:81)
at org.web3j.tx.ManagedTransaction.send(ManagedTransaction.java:128)
at org.web3j.tx.Contract.executeTransaction(Contract.java:367)
at org.web3j.tx.Contract.executeTransaction(Contract.java:350)
at org.web3j.tx.Contract.executeTransaction(Contract.java:344)
at org.web3j.tx.Contract.executeTransaction(Contract.java:339)
at org.web3j.tx.Contract.lambda$executeRemoteCallTransaction$3(Contract.java:410)
at org.web3j.protocol.core.RemoteCall.send(RemoteCall.java:42)
at com.dpisarenko.minimalcryptoexchange.delegates.TransferUsdtToExchangeAccount.execute(TransferUsdtToExchangeAccount.java:57)
TestGasProvider is defined as:
public class TestGasProvider extends StaticGasProvider {
public static final BigInteger GAS_PRICE = BigInteger.valueOf(10L);
public static final BigInteger GAS_LIMIT = BigInteger.valueOf(1L);
public TestGasProvider() {
super(GAS_PRICE, GAS_LIMIT);
}
}
usdtContract was deployed using this script, which calls deploy.js:
async function main() {
const USDT = await ethers.getContractFactory("USDT");
const usdt = await USDT.deploy(1000000000000000);
console.log("USDT contract deployed to:", usdt.address);
}
main()
.then(() => process.exit(0))
.catch((error) => {
console.error(error);
process.exit(1);
});
This contract is running on a local testnet set up as described here.
What do I need to change in any of these components (testnet, contract, deploy scripts, Java code) in order to send any amount of USDT to a particular address (without any errors)?
Update 1: If I change TestGasProvider to
public class TestGasProvider extends StaticGasProvider {
public static final BigInteger GAS_PRICE = BigInteger.valueOf(1L);
public static final BigInteger GAS_LIMIT = BigInteger.valueOf(1000000000L);
public TestGasProvider() {
super(GAS_PRICE, GAS_LIMIT);
}
}
I get another error:
java.lang.RuntimeException: Error processing transaction request: exceeds block gas limit
at org.web3j.tx.TransactionManager.processResponse(TransactionManager.java:176)
at org.web3j.tx.TransactionManager.executeTransaction(TransactionManager.java:81)
at org.web3j.tx.ManagedTransaction.send(ManagedTransaction.java:128)
at org.web3j.tx.Contract.executeTransaction(Contract.java:367)
at org.web3j.tx.Contract.executeTransaction(Contract.java:350)
at org.web3j.tx.Contract.executeTransaction(Contract.java:344)
at org.web3j.tx.Contract.executeTransaction(Contract.java:339)
at org.web3j.tx.Contract.lambda$executeRemoteCallTransaction$3(Contract.java:410)
at org.web3j.protocol.core.RemoteCall.send(RemoteCall.java:42)
at com.dpisarenko.minimalcryptoexchange.delegates.TransferUsdtToExchangeAccount.execute(TransferUsdtToExchangeAccount.java:57)
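Note that the intrinsic gas of any Ethereum transaction is 21,000, so a gas limit of 1 can never work, while 1,000,000,000 exceeds a typical block gas limit. A gas provider along the following lines should avoid both errors (the 4,300,000 value is an assumption and must stay below the block gas limit configured in genesis.json):
public class TestGasProvider extends StaticGasProvider {
    public static final BigInteger GAS_PRICE = BigInteger.valueOf(1L);
    // Above the ~21,000 intrinsic cost, below the block gas limit.
    public static final BigInteger GAS_LIMIT = BigInteger.valueOf(4_300_000L);
    public TestGasProvider() {
        super(GAS_PRICE, GAS_LIMIT);
    }
}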
Update 1
I am looking to submit a set of code changes to the branch i16 of the minimal-crypto-exchange project which passes the following test:
Step 1
Set up the environment as described here.
Step 2
Set a breakpoint on the line usdtContract.transfer(exchangeAddress, BigInteger.valueOf(10)).send(); in the TransferUsdtToExchangeAccount class.
Step 3
Start the process engine application in debug mode. Its Java main method is located here.
Wait until you see the message starting to acquire jobs in the console output:
11:59:16.031 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] INFO org.camunda.bpm.engine.jobexecutor - ENGINE-14018 JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor] starting to acquire jobs
Step 4
Login with the credentials demo/demo at http://localhost:8080.
After login you should see a page like this:
Step 5
Click on the tasklist link. You should see a page that looks like this:
Press the "Start process" link. Following screen will appear:
Click on Send USDT to the exchange account process link. Following dialog box will appear:
Enter an arbitrary value into the "business key" field and press the "Start" button.
Step 6
After a couple of seconds, the breakpoint from step 2 will activate.
The problem will be solved if usdtContract.transfer(exchangeAddress, BigInteger.valueOf(10)).send() is executed without errors.
Notes
You are allowed to modify the amount in usdtContract.transfer(exchangeAddress, BigInteger.valueOf(10)).send(); from 10 to something else.
You can also modify the parameters of the Ethereum testnet specified in docker-compose.yml and genesis.json, as well as those of the USDT smart contract which is deployed using this script.
Your solution must work in this controlled environment (i.e. no faucets may be used).
Update 2
I made following changes:
The set-up tutorial now contains step 7 in which ETH is added to the exchange account.
Now a new version of the ETH testnet is being used, major changes being that log output is more verbose and the gas price is set to 1 (see --miner.gasprice 1 in entrypoint.sh).
Modified the code in TransferUsdtToExchangeAccount so that now USDT is transferred not from the exchange account (which has zero balance), but from the buffer account.
Now I am receiving the error
org.web3j.protocol.exceptions.TransactionException: Transaction 0x4bce379a2673c4564b2eb6080607b00d1a8ac232fbddf903f353f4eeda566cae
has failed with status: 0x0. Gas used: 32767.
Revert reason: 'ERC20: transfer amount exceeds allowance'.
My skills with Ethereum are still not sharp enough to give you a proper answer, but I hope you get some guidance.
The error states that a party A is trying to transfer, on behalf of another party B, a certain quantity to a third party C, but the amount being transferred with transferFrom is greater than the amount party B approved party A to spend.
You can check the current allowance between two parties using the method of the same name on your contract.
Please consider reviewing this integration test from the web3j library on GitHub. It is different from yours, but I think it could be helpful.
In particular, it shows that the actual transferFrom operation should be performed by the beneficiary of the allowance. See the relevant code:
final String aliceAddress = ALICE.getAddress();
final String bobAddress = BOB.getAddress();
ContractGasProvider contractGasProvider = new DefaultGasProvider();
HumanStandardToken contract =
HumanStandardToken.deploy(
web3j,
ALICE,
contractGasProvider,
aliceQty,
"web3j tokens",
BigInteger.valueOf(18),
"w3j$")
.send();
//...
// set an allowance
assertEquals(contract.allowance(aliceAddress, bobAddress).send(), (BigInteger.ZERO));
transferQuantity = BigInteger.valueOf(50);
TransactionReceipt approveReceipt =
contract.approve(BOB.getAddress(), transferQuantity).send();
HumanStandardToken.ApprovalEventResponse approvalEventValues =
contract.getApprovalEvents(approveReceipt).get(0);
assertEquals(approvalEventValues._owner, (aliceAddress));
assertEquals(approvalEventValues._spender, (bobAddress));
assertEquals(approvalEventValues._value, (transferQuantity));
assertEquals(contract.allowance(aliceAddress, bobAddress).send(), (transferQuantity));
// perform a transfer as Bob
transferQuantity = BigInteger.valueOf(25);
// Bob requires his own contract instance
HumanStandardToken bobsContract =
HumanStandardToken.load(
contract.getContractAddress(), web3j, BOB, STATIC_GAS_PROVIDER);
TransactionReceipt bobTransferReceipt =
bobsContract.transferFrom(aliceAddress, bobAddress, transferQuantity).send();
HumanStandardToken.TransferEventResponse bobTransferEventValues =
contract.getTransferEvents(bobTransferReceipt).get(0);
assertEquals(bobTransferEventValues._from, (aliceAddress));
assertEquals(bobTransferEventValues._to, (bobAddress));
assertEquals(bobTransferEventValues._value, (transferQuantity));
//...
This fact is also indicated in this OpenZeppelin forum post.
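Applied to the setup in the question, a hedged sketch (the credential and address names are assumptions based on Update 2; the web3j ERC20 wrapper exposes both approve and transferFrom):
// The owner of the funds (the buffer account) must first approve the spender.
ERC20 bufferContract = ERC20.load(usdtContractAddress, web3, bufferCredentials, gasProvider);
bufferContract.approve(spenderAddress, BigInteger.valueOf(10)).send();

// The spender then calls transferFrom through its own contract instance.
ERC20 spenderContract = ERC20.load(usdtContractAddress, web3, spenderCredentials, gasProvider);
spenderContract.transferFrom(bufferAddress, exchangeAddress, BigInteger.valueOf(10)).send();
Note that a plain transfer from an account whose key you hold needs no allowance at all; transferFrom and the allowance only matter when spending tokens on behalf of another account.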
I am working an automation for IBM Rational Team Concert (IBM aka Jazz RTC).
How may one list all streams owned by a specific project area?
Which are the required API calls?
I could not find any getters in the IProjectArea instance, nor service or client instances with such methods. And I could not figure out how to use search criteria for this purpose.
The streams owned by a project area can be queried using IWorkspaceSearchCriteria, because streams are actually workspaces of kind 'stream'. The API is not obvious about how to specify the owning project area.
Get the IWorkspaceManager from the ITeamRepository, which contains the findWorkspaces method.
You don't need IProjectAreaHandle. Only the project area name.
Create an IWorkspaceSearchCriteria, set its kind to IWorkspaceSearchCriteria.STREAMS, and set exactOwnerName to the project area name.
Call IWorkspaceManager.findWorkspaces(...) to get a list of IWorkspaceHandles. The first parameter is the search criteria. The second parameter is the maximum number of results (which I set to IWorkspaceManager.MAX_QUERY_SIZE, which is 512). The third parameter is the progress monitor, which may be null.
If you need the stream name, description or other attributes, call IItemManager.fetchCompleteItems(...) to fetch the full IWorkspace instances.
Here is an example in Groovy:
List<IWorkspace> listStreams(String projectAreaName) {
    final manager = repository.getClientLibrary(IWorkspaceManager) as IWorkspaceManager
    final criteria = IWorkspaceSearchCriteria.FACTORY.newInstance()
    criteria.setKind(IWorkspaceSearchCriteria.STREAMS)
    criteria.setExactOwnerName(projectAreaName)
    // Find the stream handles, then fetch the full items to read name, description, etc.
    final handles = manager.findWorkspaces(criteria, IWorkspaceManager.MAX_QUERY_SIZE, null)
    final itemManager = repository.itemManager()
    return itemManager.fetchCompleteItems(handles, IItemManager.DEFAULT, null) as List<IWorkspace>
}
We have 5 custom reports for our 94 districts. A capability grants access to these custom reports.
The issue is that each district should not see the report results from another district.
Currently, the only alternative is to create 5 * 94 = 470 custom reports, granting a set of 5 to each district. However, this is cumbersome when a report needs to be updated.
TaskDefinitions (reports) create TaskResult objects (the result of a report). In addition to the TaskResult object, a JasperReport object is created. Neither the TaskResult nor the JasperReport object "re-executes" the BeanShell when you open the task result.
Is there a way to only have 5 reports and scope the results so that only users in that district can see them?
I have an example of how this might be achieved, based on the code below, which looks at the scope(s) of the user running the report. It will only return identities that are in the same scope as that user:
// Retrieve Scope of Executor then filter all Identities on that Scope only
import org.apache.commons.logging.Log;
import sailpoint.object.Filter;
import sailpoint.object.Identity;
import sailpoint.object.Scope;
Identity identity = context.getObjectByName(Identity.class, arguments.get("launcher"));
if (identity != null) {
String scopeName = identity.getAssignedScope().getDisplayableName();
List roleFilters = new ArrayList();
if (scopeName != null) {
roleFilters.add(Filter.eq("identity.assignedScope.name", scopeName));
}
if (!roleFilters.isEmpty()) {
queryOptions.addFilter(Filter.or(roleFilters));
}
} else {
// When saving with Preview or Execute, the launcher is empty, so all results would be shown.
// This filter prevents that (it creates an empty report; the report works when executed from My Reports).
queryOptions.addFilter(Filter.eq("identity.name", "xxx"));
}
return queryOptions;
The problem with the code sample above:
This will create the report intended for Group A; however, Groups B and C will also be able to view it.
So the end goal is to have one report that anyone can run, where only the data associated with the viewer's scope is visible, no matter which user group is involved. Group B would only be able to view Group B data even if Group A ran the report.
I think you don't have good options here.
What comes to my mind is to create these reports programmatically (writing some script to generate the XML artifact for the TaskDefinition and importing/exporting it using IIQDA, for example) and maintain them the same way, so every time you need to change each one of these hundreds of artifacts, you can just re-generate them via code.
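A hedged sketch of such a generator (the %SCOPE% placeholder, file names, and layout are assumptions; the template would be one of the 5 report TaskDefinition XMLs):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class GenerateDistrictReports {
    public static void main(String[] args) throws Exception {
        // One TaskDefinition XML template, one output file per district scope.
        String template = new String(Files.readAllBytes(Paths.get("report-template.xml")));
        List<String> districts = Files.readAllLines(Paths.get("districts.txt"));
        Files.createDirectories(Paths.get("generated"));
        for (String district : districts) {
            String xml = template.replace("%SCOPE%", district);
            Files.write(Paths.get("generated", "Report-" + district + ".xml"), xml.getBytes());
        }
    }
}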
The only thing I'd do in a different way is to use 94 scopes for each 5-set report instead of using capabilities.
I'm writing a simple JavaFX 7 application where I display data pulled out of a database using a StackedBarChart. I also give the user the ability to filter the displayed data sets based on a specific property's value. The problem I'm facing is that there seems to be some caching issue. Consider the following scenario:
Initial load, display everything to the user - no filtering involved.
Say our categories are named 1,2,3,4 and 5, and are rendered in that order (consider them sorted)
The user now selects a filter value. This leads to only categories 1,2,4 and 5 being on the screen (again, in that order - this is the expected behavior)
The user now resets the filter to "do-not-filter".
The expected output of step 3 (resetting the filter) would be 1,2,3,4 and 5, in that order. However, it is 1,2,4,5,3. Notice the category that was filtered out is added back at the end instead of at its original position.
Things I've tried so far:
Assigning a new ObservableList via the axis's setCategories method. This doesn't work.
Same as above, but also forcing the category list to null beforehand.
Sorting the category list. This doesn't work either.
I can't (yet) update to Java 8 - I also can't just leave this as a broken feature because this is expected to roll out to users before we upgrade to Java 8. So JavaFX 8's FilteredList is out of question (and a backport is very much annoying just from looking at the changes to ObservableList). I also don't want to entirely recreate the graph if I can avoid it.
At this point, I'm out of ideas. Any suggestions are welcome. Below is the function that populates the chart.
private void refreshContents() {
this.vaguesTotal.getData().clear();
this.vaguesDone.getData().clear();
this.vaguesPending.getData().clear();
this.xAxis.setCategories(null);
this.chartCategories = FXCollections.observableArrayList();
// Already sorted here
for (VagueLight vagueInfo : context.getVagues()) {
if (this.categoryFilter != null && this.categoryFilter != vagueInfo.getCategory())
continue;
int dossiersTraites = vagueInfo.getNbDossiersTraites();
int dossiersPending = vagueInfo.getNbDossiersATraiter();
String vagueIdentifier = Integer.toString(vagueInfo.getId());
this.vaguesTotal.getData().add(new Data<String, Number>(vagueIdentifier, 0, vagueInfo));
this.vaguesDone.getData().add(new Data<String, Number>(vagueIdentifier, dossiersTraites, vagueInfo));
this.vaguesPending.getData().add(new Data<String, Number>(vagueIdentifier, dossiersPending, vagueInfo));
this.chartCategories.add(vagueIdentifier);
}
// This just sets up event handlers and styles for the series
for (Series<String, Number> dataSeries : this.barChart.getData()) {
for (Data<String, Number> dataNode : dataSeries.getData()) {
initializeDataNode(dataNode, dataSeries);
}
}
// This is where the "bug" happens
this.xAxis.setCategories(this.chartCategories);
layout(true);
}
I see cq:lastModified in the page properties, which gives me the user who last modified the page. Is there any way to get the list of the latest 10 users who modified the page? Does AEM store that kind of information at all?
Thanks!
When on the page in CQ, if you open the Information tab in the Sidekick you can view the Audit log. This will show you modification actions on the page, including page activation.
I think this stores 15 entries by default (I'm not sure if that number is editable).
Alternatively, you can view the History log under $CQ_HOME/crx-quickstart/logs/history.log. This will show entries for View/Edit/Delete on individual nodes (so, for example, you can see that a component was edited rather than just a page).
It can be rotated by date or size as per other CQ logs, and will show:
Timestamp
Action
Node
Node type
For example:
28.07.2014 15:59:05 VIEW admin [/content/dam/geometrixx/travel/train_platform_boarding.jpg] [dam:Asset,mix:versionable]
There is no OOTB way to do this.
But here is how you can try to achieve it:
1) Create a custom workflow with a custom process step. In this process step, copy the cq:lastModifiedBy property value to a new custom property (let's call it lastModifiedUsers, which will be an array).
2) Now create a launcher that runs on Modified events for the cq:PageContent node type. Use this launcher to trigger the workflow created in step 1.
Now every time the page is modified, the launcher triggers the workflow, which appends the cq:lastModifiedBy value to the custom array property and saves it on the page's jcr:content node.
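A hedged sketch of such a process step in Java (class and property names are assumptions; the payload is assumed to be the page path):
import java.util.ArrayList;
import java.util.List;
import javax.jcr.Node;
import javax.jcr.Session;
import javax.jcr.Value;
import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkItem;
import com.adobe.granite.workflow.exec.WorkflowProcess;
import com.adobe.granite.workflow.metadata.MetaDataMap;

public class TrackModifiersStep implements WorkflowProcess {
    @Override
    public void execute(WorkItem item, WorkflowSession wfSession, MetaDataMap args)
            throws WorkflowException {
        try {
            Session session = wfSession.adaptTo(Session.class);
            // Payload is assumed to be the page path, e.g. /content/mysite/en/mypage.
            String pagePath = item.getWorkflowData().getPayload().toString();
            Node content = session.getNode(pagePath + "/jcr:content");
            String user = content.getProperty("cq:lastModifiedBy").getString();
            // Append the latest modifier to the multi-valued custom property.
            List<String> users = new ArrayList<>();
            if (content.hasProperty("lastModifiedUsers")) {
                for (Value v : content.getProperty("lastModifiedUsers").getValues()) {
                    users.add(v.getString());
                }
            }
            users.add(user);
            while (users.size() > 10) { // keep only the latest 10 entries
                users.remove(0);
            }
            content.setProperty("lastModifiedUsers", users.toArray(new String[0]));
            session.save();
        } catch (Exception e) {
            throw new WorkflowException("Could not record page modifier", e);
        }
    }
}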
Use the AuditLog interface from the com.day.cq.audit package. You can use the AuditLog object to invoke getLatestEvents(String[] categories, String path, int max), specifying max as 10.
You will receive an array of AuditLogEntry objects, from which you can get all the user IDs.
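A hedged usage sketch (the page-audit category string and the page path are assumptions; AuditLog is typically obtained as an OSGi service via @Reference):
import com.day.cq.audit.AuditLog;
import com.day.cq.audit.AuditLogEntry;

// auditLog injected via @Reference; the path below is a hypothetical page path.
AuditLogEntry[] entries = auditLog.getLatestEvents(
        new String[] { "com/day/cq/wcm/core/page" }, // assumed page-audit category
        "/content/mysite/en/mypage",
        10);
for (AuditLogEntry entry : entries) {
    System.out.println(entry.getUserId() + " modified the page at " + entry.getModified());
}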