I am using JNA to read Windows event logs. I can get a fair amount of data out of each record but I can't quite get the field names.
To read the logs I am doing:
EventLogIterator iter = new EventLogIterator("Security");
while (iter.hasNext()) {
    EventLogRecord record = iter.next();
    System.out.println("Event ID: " + record.getEventId()
            + ", Event Type: " + record.getType()
            + ", Event Source: " + record.getSource());

    String[] strings = record.getStrings();
    for (String str : strings) {
        System.out.println(str);
    }
}
I can get data like the id, type, and source easily. Then I can get the list of strings which may be for SubjectUserSid, SubjectUserName, etc.
I've been trying to pair that data with its field names. Is there an easy way to extract the field names/headers for each of the strings returned by record.getStrings()? I noticed there is a byte[] data variable in the record; I have tried to read it, but I haven't been able to get any useful information out of it. I know I can get the data length and offset for certain variables, which I think would let me extract the data I want, but I was wondering whether that is the correct approach or whether there is an easier way.
I am writing an application which needs to load a large csv file that is pure data and doesn't contain any headers.
I am using the fastCSV library to parse the file; however, the data needs to be stored and specific fields need to be retrieved. Since not all of the data is necessary, I am skipping every third line.
Is there a way to set the headers after the file has been parsed and save it in a data structure such as an ArrayList?
Here is the function which loads the file:
public void fastCsv(String filePath) {
    File file = new File(filePath);
    CsvReader csvReader = new CsvReader();
    int lineCounter = 1;
    long startTime = System.currentTimeMillis();
    try (CsvParser csvParser = csvReader.parse(file, StandardCharsets.UTF_8)) {
        CsvRow row;
        while ((row = csvParser.nextRow()) != null) {
            // Skip every third line (lineCounter divisible by 3)
            if ((lineCounter % 3) > 0) {
                System.out.println(row);
            }
            lineCounter++;
        }
        long elapsedTime = System.currentTimeMillis() - startTime;
        System.out.println("Execution Time in ms: " + elapsedTime);
        // No explicit close() needed: try-with-resources closes the parser
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Any insight would be greatly appreciated.
univocity-parsers supports field selection and can do this very easily. It's also faster than the library you are using.
Here's how you can use it to select columns of interest:
Input
String input = "X, X2, Symbol, Date, Open, High, Low, Close, Volume\n" +
" 5, 9, AAPL, 01-Jan-2015, 110.38, 110.38, 110.38, 110.38, 0\n" +
" 2710, 289, AAPL, 01-Jan-2015, 110.38, 110.38, 110.38, 110.38, 0\n" +
" 5415, 6500, AAPL, 02-Jan-2015, 111.39, 111.44, 107.35, 109.33, 53204600";
Configure
CsvParserSettings settings = new CsvParserSettings(); //many options here, check the tutorial
settings.setHeaderExtractionEnabled(true); //tells the parser to use the first row as the header row
settings.selectFields("X", "X2"); //selects the fields
Parse and print results
CsvParser parser = new CsvParser(settings);
for (String[] row : parser.iterate(new StringReader(input))) {
    System.out.println(Arrays.toString(row));
}
Output
[5, 9]
[2710, 289]
[5415, 6500]
With field selection you can use any sequence of fields and have rows with varying column counts, and the parser will handle this just fine. No need to write complex logic for that.
To process the File in your code, change the example above to this:
for (String[] row : parser.iterate(new File(filePath))) {
    ... // your logic goes here
}
If you want a more usable record (with typed values), use this instead:
for (Record record : parser.iterateRecords(new File(filePath))) {
    ... // your logic goes here
}
Speeding up
The fastest way of processing the file is through a RowProcessor. That's a callback that receives the rows parsed from the input:
settings.setProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        System.out.println(Arrays.toString(row));
        context.skipLines(3); // use the context object to control the parser
    }
});
CsvParser parser = new CsvParser(settings);
//`parse` doesn't return anything. Rows go to the `rowProcessed` method.
parser.parse(new StringReader(input));
You should be able to parse very large files pretty quickly. If things slow down, look at your own code first: avoid accumulating values in lists or collections in memory (or at least pre-allocate the collections to a sensible size), and give the JVM plenty of memory to work with using the -Xms and -Xmx flags.
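To illustrate the pre-allocation point above, here is a minimal sketch (the class and method names are made up for illustration): sizing an ArrayList up front avoids the repeated grow-and-copy cycles that the default capacity would trigger while loading a large file.

```java
import java.util.ArrayList;
import java.util.List;

public class Preallocate {
    // Pre-sizing the backing array avoids repeated grow-and-copy
    // cycles as the list fills up while loading rows
    static List<String[]> load(int expectedRows) {
        List<String[]> rows = new ArrayList<>(expectedRows);
        for (int i = 0; i < expectedRows; i++) {
            rows.add(new String[]{"a", "b"});
        }
        return rows;
    }

    public static void main(String[] args) {
        System.out.println(load(100_000).size()); // prints 100000
    }
}
```

The same idea applies to HashMap and StringBuilder: if you know roughly how many elements are coming, say so in the constructor.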
Right now this parser is the fastest you can find. I made this performance comparison a while ago you can use for reference.
Hope this helps.
Disclosure: I am the author of this library. It's open-source and free (Apache 2.0 license).
Do you know which fields/columns you want to keep, and what you'd like the "header" value to be? I.e., you want the first and third columns, and you want them called "first" and "third"? If so, you could build a HashMap of string/object pairs (or another appropriate type, depending on your actual data and needs) and add each HashMap to an ArrayList. The following should get you going; just change the HashMap types as needed.
ArrayList<HashMap<String, String>> arr = new ArrayList<>();
while ((row = csvParser.nextRow()) != null) {
    if ((linecounter % 3) > 0) {
        // keep col1 and col3
        // A fresh map per row: reusing one map (e.g. with clear())
        // would leave every list entry pointing at the same object
        HashMap<String, String> hm = new HashMap<>();
        hm.put("first", row.getField(0));
        hm.put("third", row.getField(2));
        arr.add(hm);
    }
    linecounter++;
}
If you want to capture all columns, you can use a similar technique, but I'd build a mapping data structure so that you can match field indexes to column header names in a loop, adding each column to the HashMap that is then stored in the ArrayList.
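That index-to-header mapping can be sketched like this (a minimal, self-contained example; the header names and the `toNamedRow` helper are hypothetical, and a plain String[] stands in for the library's row type):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HeaderMapper {
    // Header names you assign after parsing, in column order (hypothetical)
    static final String[] HEADERS = {"first", "second", "third"};

    // Turn one parsed row into a name->value map by zipping headers with fields
    static Map<String, String> toNamedRow(String[] fields) {
        Map<String, String> row = new LinkedHashMap<>(); // preserves column order
        for (int i = 0; i < HEADERS.length && i < fields.length; i++) {
            row.put(HEADERS[i], fields[i]);
        }
        return row;
    }

    public static void main(String[] args) {
        List<Map<String, String>> rows = new ArrayList<>();
        rows.add(toNamedRow(new String[]{"a", "b", "c"}));
        rows.add(toNamedRow(new String[]{"d", "e", "f"}));
        System.out.println(rows.get(1).get("third")); // prints f
    }
}
```

Inside your parsing loop you would call `toNamedRow` with the fields of each kept row and add the result to the ArrayList.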
I need to implement the method
void filter(Reader mails, Reader groups, Writer users) throws IOException
in such a way that it combines two pieces of data from the readers into one writer.
The file for mails would look like this:
Login;Email
login1;mail1#mail.com
login2;mail2#mail.com
login3;mail3#mail.com
login4;mail4#mail.com
and the file for groups would look like this:
Login;Group;
login1;Group1
login2;Group2
login3;Group3
login4;Group2
And the result of merging should look like this:
Login;Email;Group
login1;mail1#mail.com;Group1
login2;mail2#mail.com;Group2
login3;mail3#mail.com;Group3
login4;mail4#mail.com;Group2
So, what I came up with is: get a string from the first reader, then get another string from the second reader, manipulate them as needed and then write the result with writer.
But is there a way to do it differently, or just more elegantly?
PS: I'm obliged to use only Reader and Writer classes.
BTW: when I write something to a file with a Writer and then look at the file, I see something unreadable. But if I read the same file back with a Reader and print it to the console, it looks fine. Is that normal? How can I write the file so that it is readable?
How about using a Map and a POJO container?
The POJO is:
String email;
String group;
Then you have a hashmap
Map<String,EmailGroup> emailGroup = new HashMap<String,EmailGroup>();
Then your reading code reads the email list first and populates the groups afterwards:
readEmail(emailGroup);
readGroup(emailGroup);
void readEmail(Map<String, EmailGroup> map) {
    EmailGroup tempGroup;
    if (map.containsKey(login)) {
        tempGroup = map.get(login);
    } else {
        tempGroup = new EmailGroup();
    }
    tempGroup.setEmail(readEmailAddress);
    map.put(login, tempGroup);
}
readGroup() does the same but calls setGroup().
This is not a full solution but should provide a suggestion on another possible way to resolve this issue.
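To make the suggestion above concrete, here is a minimal, runnable sketch under the same assumptions (the POJO is simplified to public fields instead of setters, and `computeIfAbsent` replaces the manual contains/get/new dance):

```java
import java.util.HashMap;
import java.util.Map;

public class EmailGroupDemo {
    // Simplified POJO holding both values for one login
    static class EmailGroup {
        String email;
        String group;
    }

    // Look up the login, creating the container on first sight
    static EmailGroup getOrCreate(Map<String, EmailGroup> map, String login) {
        return map.computeIfAbsent(login, k -> new EmailGroup());
    }

    public static void main(String[] args) {
        Map<String, EmailGroup> emailGroup = new HashMap<>();
        // First pass: one line from the mails reader
        getOrCreate(emailGroup, "login1").email = "mail1@mail.com";
        // Second pass: one line from the groups reader
        getOrCreate(emailGroup, "login1").group = "Group1";

        EmailGroup eg = emailGroup.get("login1");
        System.out.println("login1;" + eg.email + ";" + eg.group);
    }
}
```

Because the map is keyed by login, the two input files do not even have to list the logins in the same order.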
If you want to implement a method with this signature, you could do this:
public static void main(String[] args) throws Exception {
    String mails = "Login;Email\n"
            + "login1;mail1#mail.com\n"
            + "login2;mail2#mail.com\n"
            + "login3;mail3#mail.com\n"
            + "login4;mail4#mail.com";
    String groups = "Login;Group;\n"
            + "login1;Group1\n"
            + "login2;Group2\n"
            + "login3;Group3\n"
            + "login4;Group2";
    Reader mailsReader = new StringReader(mails);
    Reader groupsReader = new StringReader(groups);
    Writer mergedWriter = new StringWriter();
    filter(mailsReader, groupsReader, mergedWriter);
    System.out.println(mergedWriter.toString());
}
static void filter(Reader mails, Reader groups, Writer users) throws IOException {
    BufferedReader mbr = new BufferedReader(mails);
    BufferedReader gbr = new BufferedReader(groups);
    BufferedWriter ubw = new BufferedWriter(users);
    // Assumes both inputs list the same logins in the same order
    String ml = mbr.readLine();
    String gl = gbr.readLine();
    while (ml != null && gl != null) {
        ubw.write(ml + ";" + gl.split(";")[1] + "\n");
        ml = mbr.readLine();
        gl = gbr.readLine();
    }
    ubw.flush();
}
Output:
Login;Email;Group
login1;mail1#mail.com;Group1
login2;mail2#mail.com;Group2
login3;mail3#mail.com;Group3
login4;mail4#mail.com;Group2
Looks like a join on the common key Login.
This should be easy to solve with some maps and some POJOs.
Below is some additional information that should help clarify my thoughts.
Let's consider following data table. It contains people's first and last names, their email addresses and logins.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Login   + Email            + First name + Last name    +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ smith   + smith#mail.com   + John       + Smith        +
+ miller  + miller#mail.com  + John       + Miller       +
+ jackson + mail#jackson.com + Scott      + Jackson      +
+ scott   + me#scott.com     + Scott      + Jackson      +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This information is enough for simple data inquiries. If we want to know what John Smith's email address is, we have to perform a search in the first table. If we want to know Scott Jackson's address, we have a problem, since there are two people with the exact same name in our database.
So a means to differentiate people is necessary: that is the column Login. It is unique in this table and can thus be used to avoid ambiguities.
Because Login has this property, it is called a key.
This next table contains the group affiliation per login.
+++++++++++++++++++++
+ Login   + Group   +
+++++++++++++++++++++
+ smith   + user    +
+ miller  + user    +
+ jackson + admin   +
+ scott   + admin   +
+++++++++++++++++++++
The table above also has Login as its key. That property allows another type of inquiry: we can ask for the group affiliation of a person. To do so, we first retrieve the person's login from the first table; since Login is a key in the second table, it can then be used to look up the group affiliation.
This process is called joining: we combined the group affiliation from the second table with the information on first and last names from the first table, using the login as the key.
A natural join performs this operation on all rows. At this point I hope it is clear why I proposed a join: the two files correspond to two tables (which are invariant to changes in the order of the data). Joining them yields a third table that contains all the information, and printing that table answers the OP's question.
To utilize the power of the underlying relational algebra, a simple java.util.Map with Login as the key can be used.
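A minimal sketch of that Map-based join (the `join` helper and class name are made up for illustration; each input map is keyed by Login, and the join simply concatenates the matching values):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LoginJoin {
    // Join two Login-keyed tables: email-by-login and group-by-login.
    // Keeps only logins present in both maps (an inner join).
    static Map<String, String> join(Map<String, String> emails, Map<String, String> groups) {
        Map<String, String> joined = new LinkedHashMap<>(); // preserve row order
        for (Map.Entry<String, String> e : emails.entrySet()) {
            String group = groups.get(e.getKey()); // Login is the common key
            if (group != null) {
                joined.put(e.getKey(), e.getValue() + ";" + group);
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        Map<String, String> emails = new LinkedHashMap<>();
        emails.put("smith", "smith@mail.com");
        emails.put("miller", "miller@mail.com");

        Map<String, String> groups = new LinkedHashMap<>();
        groups.put("smith", "user");
        groups.put("miller", "user");

        for (Map.Entry<String, String> row : join(emails, groups).entrySet()) {
            System.out.println(row.getKey() + ";" + row.getValue());
        }
    }
}
```

Unlike the line-by-line merge, this version does not require the two inputs to list logins in the same order.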
I've spent several frustrating days on this now and would appreciate some help. I have a Java agent in Lotus Domino 8.5.3 which is activated by a CGI POST from my LotusScript validation agent, which checks that the customer has filled in the billing and delivery address form. The following code parses the incoming data into a HashMap mapping field names to their respective values.
HashMap hmParam = new HashMap(); // our HashMap for request_content data
// Grab transaction parameters from the form that called the agent (CGI: request_content)
if (contentDecoded != null) {
    String[] arrParam = contentDecoded.split("&");
    for (int i = 0; i < arrParam.length; i++) {
        int n = arrParam[i].indexOf("=");
        String paramName = arrParam[i].substring(0, n);
        String paramValue = arrParam[i].substring(n + 1);
        hmParam.put(paramName, paramValue);
        if (paramName.equalsIgnoreCase("transaction_id")) {
            transactionID = paramValue;
            description = "Order " + transactionID + " from Fareham Wine Cellar";
        }
        if (paramName.equalsIgnoreCase("amount")) {
            orderTotal = paramValue;
        }
        if (paramName.equalsIgnoreCase("deliveryCharge")) {
            shipping = paramValue;
        }
    }
}
The block of code above dates back over a year to my original integration of shopping cart to Barclays EPDQ payment gateway. In that agent I recover the specific values and build a form that is then submitted to EPDQ CPI later on in the agent like this;
out.print("<input type=\"hidden\" name=\"shipping\" value=\"");
out.println(hmParam.get("shipping") + "\">");
I want to do exactly the same thing here, except the agent crashes with a NullPointerException when I try. I can successfully iterate through the HashMap with the snippet below, so I know the data is present, but I can't understand why I can't use myHashMap.get(key) to fetch each field value in the order I want them for the HTML form. The original agent in the other application is still in use, so what is going on? The data, too, is essentially unchanged: String field names mapped to String values.
Iterator it = cgiData.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry pairs = (Map.Entry) it.next();
    out.println("<br />" + pairs.getKey() + " = " + pairs.getValue());
}
I did two things that may have had an impact, in the process of trying to debug what was going on I needed these further imports;
import java.util.Iterator;
import java.util.Map;
Although I'm not iterating over the HashMap, I've left them in, which gives me the option of dumping the HashMap to my system audit trail once the application is in production. In variations of the snippet below, after it started working, I was able to get at any of the data I needed, even if the value was null; toString() also seemed to be optional, as it made no difference to the output.
String cgiValue = "";
cgiValue = hmParam.get("ship_to_lastname").toString();
out.println("<br />Lastname: " + cgiValue);
out.println("<br />Company name: " + hmParam.get("bill_to_company"));
out.println("<br />First name: " + hmParam.get("ship_to_firstname"));
The second thing I did, while trying to get code to work was I enabled the option "Compile Java code with debugging information" for the agent, this may have done something to the way the project was built within the Domino Developer client.
I think I have to put this down to some sort of internal error introduced when Domino Designer compiled the code. I had a major crash last night while working on this, which necessitated a cold boot of my laptop. You may also find that when using Domino Designer 8.5.x, strange things can happen if you don't completely close down all its tasks from time to time with KillNotes.
Using the "Network Updates API" example at the following link I am able to post network updates with no problem using client.postNetworkUpdate(updateText).
http://code.google.com/p/linkedin-j/wiki/GettingStarted
So posting works great. However, posting an update does not return an "UpdateKey", which is needed to retrieve stats for the post such as comments, likes, etc. Without the UpdateKey I cannot retrieve stats. So what I would like to do is post, then retrieve the last post using getNetworkUpdates(); that retrieval will include the UpdateKey I need later for the stats. Here's a sample script in Java showing how to get network updates, but I need to do this in ColdFusion instead of Java.
Network network = client.getNetworkUpdates(EnumSet.of(NetworkUpdateType.STATUS_UPDATE));
System.out.println("Total updates fetched:" + network.getUpdates().getTotal());
for (Update update : network.getUpdates().getUpdateList()) {
    System.out.println("-------------------------------");
    System.out.println(update.getUpdateKey() + ":"
            + update.getUpdateContent().getPerson().getFirstName() + " "
            + update.getUpdateContent().getPerson().getLastName() + "->"
            + update.getUpdateContent().getPerson().getCurrentStatus());
    if (update.getUpdateComments() != null) {
        System.out.println("Total comments fetched:" + update.getUpdateComments().getTotal());
        for (UpdateComment comment : update.getUpdateComments().getUpdateCommentList()) {
            System.out.println(comment.getPerson().getFirstName() + " "
                    + comment.getPerson().getLastName() + "->" + comment.getComment());
        }
    }
}
Anyone have any thoughts on how to accomplish this using Coldfusion?
Thanks
I have not used that api, but I am guessing you could use the first two lines to grab the number of updates. Then use the overloaded client.getNetworkUpdates(start, end) method to retrieve the last update and obtain its key.
Totally untested, but something along these lines:
<cfscript>
...
// not sure about accessing the STATUS_UPDATE enum. One of these should work:
// method 1
STATUS_UPDATE = createObject("java", "com.google.code.linkedinapi.client.enumeration.NetworkUpdateType$STATUS_UPDATE");
// method 2
NetworkUpdateType = createObject("java", "com.google.code.linkedinapi.client.enumeration.NetworkUpdateType");
STATUS_UPDATE = NetworkUpdateType.valueOf("STATUS_UPDATE");
enumSet = createObject("java", "java.util.EnumSet");
network = yourClientObject.getNetworkUpdates(enumSet.of(STATUS_UPDATE));
numOfUpdates = network.getUpdates().getTotal();
// Add error handling in case numOfUpdates = 0
result = yourClientObject.getNetworkUpdates(numOfUpdates, numOfUpdates);
lastUpdate = result.getUpdates().getUpdateList().get(0);
key = lastUpdate.getUpdateKey();
</cfscript>
You can also use socialauth library to retrieve updates and post status on linkedin.
http://code.google.com/p/socialauth
Calling All DWR Gurus!
I am currently using reverse Ajax to add data to a table in a web page dynamically.
When I run the following method:
public static void addRows(String tableBdId, String[][] data) {
Util dwrUtil = new Util(getSessionForPage()); // Get all page sessions
dwrUtil.addRows(tableBdId, data);
}
The new row gets created in my web page as required.
However, in order to update these newly created values later on, the generated tags need to have an element ID for me to access.
I have had a look at the DWR Javadoc and you can specify some additional options (see http://directwebremoting.org/dwr/browser/addRows), but this makes little sense to me; the documentation is very sparse.
If anyone could give me a clue as to how I could specify the element id's for the created td elements I would be most grateful. Alternatively if anyone knows of an alternative approach I would be keen to know.
Kind Regards
Karl
The closest I could get was to pass in some arguments to give the element an id. See below:
public static void addRows(String tableBdId, String[] data, String rowId) {
    Util dwrUtil = new Util(getSessionForPage()); // get all page sessions
    // Create the options, which is needed to add a row ID
    String options = "{" +
            " rowCreator:function(options) {" +
            "   var row = document.createElement(\"tr\");" +
            "   row.setAttribute('id','" + rowId + "');" +
            "   return row;" +
            " }," +
            " cellCreator:function(options) {" +
            "   var td = document.createElement(\"td\");" +
            "   return td;" +
            " }," +
            " escapeHtml:true" +
            "}";
    // Wrap the supplied row into an array to match the API
    String[][] args1 = new String[][] { data };
    dwrUtil.addRows(tableBdId, args1, options);
}
Is this line of your code really working?
dwrUtil.addRows(tableBdId, data);
The DWR addRows method needs at least 3 parameters of 4 to work, they are:
id: The id of the table element (preferably a tbody element);
array: Array (or object from DWR 1.1) containing one entry for each row in the updated table;
cellfuncs: An array of functions (one per column) for extracting cell data from the passed row data;
options: An object containing various options.
The id, array, and cellfuncs arguments are required, and in your case you'll have to pass options as well, because you want to customize the row creation and set the td IDs. Inside the options argument, use the parameter called "cellCreator" to supply your own way of creating the td HTML element. Check it out:
// Use the cellFuncs var to set the values you want to display inside the table rows.
// The syntax is object.property;
// use one function(data) for each property you need to show in your table.
var cellFuncs = [
    function(data) { return data.name_of_the_first_object_property; },
    function(data) { return data.name_of_the_second_object_property; }
];
DWRUtil.addRows(
    tableBdId,
    data,
    cellFuncs,
    {
        // This function lets you customize the generated td element
        cellCreator:function(options) {
            var td = document.createElement("td");
            // Setting the td element id using the rowIndex;
            // just implement your own id below
            td.id = options.rowIndex;
            return td;
        }
    });