"merge" two readers into one writer - java

I need to implement the method
void filter(Reader mails, Reader groups, Writer users) throws IOException
in such a way that it combines two pieces of data from the readers into one writer.
The file for mails would look like this:
Login;Email
login1;mail1#mail.com
login2;mail2#mail.com
login3;mail3#mail.com
login4;mail4#mail.com
and the file for groups would look like this:
Login;Group;
login1;Group1
login2;Group2
login3;Group3
login4;Group2
And the result of merging should look like this:
Login;Email;Group
login1;mail1#mail.com;Group1
login2;mail2#mail.com;Group2
login3;mail3#mail.com;Group3
login4;mail4#mail.com;Group2
So, what I came up with is: get a string from the first reader, then get another string from the second reader, manipulate them as needed, and then write the result with the writer.
But is there a different or just more elegant way to do it?
PS: I'm obliged to use only Reader and Writer classes.
BTW: when I write something to a file with a Writer and then look into the file, I see something unreadable. But if I read the same file with a Reader and print it to the console, it looks OK. Is that normal? Or how can I write to the file so that it is readable?

How about using a Map and a POJO container?
The POJO is
String email;
String group;
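A minimal sketch of such a POJO might look like this (the class name EmailGroup is taken from the map declaration below; the getters and setters are assumed):
class EmailGroup {
    private String email;
    private String group;

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public String getGroup() { return group; }
    public void setGroup(String group) { this.group = group; }
}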
Then you have a HashMap:
Map<String,EmailGroup> emailGroup = new HashMap<String,EmailGroup>();
Then your reading code reads the email list first and populates the groups afterwards:
readEmail(mails, emailGroup);
readGroup(groups, emailGroup);
void readEmail(Reader mails, Map<String, EmailGroup> map) throws IOException
{
    BufferedReader reader = new BufferedReader(mails);
    reader.readLine(); // skip the "Login;Email" header line
    String line;
    while ((line = reader.readLine()) != null) {
        String login = line.split(";")[0];
        String emailAddress = line.split(";")[1];
        EmailGroup tempGroup;
        if (map.containsKey(login)) {
            tempGroup = map.get(login);
        } else {
            tempGroup = new EmailGroup();
        }
        tempGroup.setEmail(emailAddress);
        map.put(login, tempGroup);
    }
}
The readGroup method does the same for the groups reader, but calls setGroup() instead.
This is not a full solution, but it should suggest another possible way to approach the problem.
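For completeness, a rough sketch of the step that writes the merged map back out through the Writer; writeUsers is just a hypothetical helper name, and the EmailGroup getters are assumed as above:
void writeUsers(Map<String, EmailGroup> map, Writer users) throws IOException
{
    BufferedWriter writer = new BufferedWriter(users);
    writer.write("Login;Email;Group\n"); // header row of the merged output
    for (Map.Entry<String, EmailGroup> entry : map.entrySet()) {
        EmailGroup eg = entry.getValue();
        writer.write(entry.getKey() + ";" + eg.getEmail() + ";" + eg.getGroup() + "\n");
    }
    writer.flush(); // make sure everything reaches the underlying Writer
}
Note that a plain HashMap does not preserve insertion order; a LinkedHashMap would keep the logins in the order they were read.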

If you want to implement a method with this signature, you could do this:
public static void main(String[] args) throws Exception {
    String mails = "Login;Email\n"
            + "login1;mail1#mail.com\n"
            + "login2;mail2#mail.com\n"
            + "login3;mail3#mail.com\n"
            + "login4;mail4#mail.com";
    String groups = "Login;Group;\n"
            + "login1;Group1\n"
            + "login2;Group2\n"
            + "login3;Group3\n"
            + "login4;Group2";
    Reader mailsReader = new StringReader(mails);
    Reader groupsReader = new StringReader(groups);
    Writer mergedWriter = new StringWriter();
    filter(mailsReader, groupsReader, mergedWriter);
    System.out.println(mergedWriter.toString());
}
static void filter(Reader mails, Reader groups, Writer users) throws IOException {
    BufferedReader mbr = new BufferedReader(mails);
    BufferedReader gbr = new BufferedReader(groups);
    BufferedWriter ubw = new BufferedWriter(users);
    String ml = mbr.readLine();
    String gl = gbr.readLine();
    while (ml != null && gl != null) {
        ubw.write(ml + ";" + gl.split(";")[1] + "\n");
        ml = mbr.readLine();
        gl = gbr.readLine();
    }
    ubw.flush();
}
Output:
Login;Email;Group
login1;mail1#mail.com;Group1
login2;mail2#mail.com;Group2
login3;mail3#mail.com;Group3
login4;mail4#mail.com;Group2

Looks like a join on the common key Login.
This should be easy to solve with some maps and some POJOs.
The following is some additional information that should help clarify my thinking.
Let's consider the following data table. It contains people's first and last names, their email addresses and logins.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Login   + Email            + First name + Last name +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ smith   + smith#mail.com   + John       + Smith     +
+ miller  + miller#mail.com  + John       + Miller    +
+ jackson + mail#jackson.com + Scott      + Jackson   +
+ scott   + me#scott.com     + Scott      + Jackson   +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
This information is enough for simple data inquiries. If we want to know John Smith's email address, we perform a search in this table. If we want to know Scott Jackson's address, we have a problem, since there are two people with exactly the same name in our database.
So a means of differentiating people is necessary; that is the column Login. It is unique in this table and can therefore be used to avoid ambiguities.
Because Login has this property, it is called a key.
This next table contains the group affiliation per login.
+++++++++++++++++++
+ Login   + Group +
+++++++++++++++++++
+ smith   + user  +
+ miller  + user  +
+ jackson + admin +
+ scott   + admin +
+++++++++++++++++++
The above table has Login as its key, too. Using that property allows us to make another type of inquiry: we can ask for the group affiliation of a person. To do so, we first have to retrieve the login of the person in question, and for that the first table is used. Since Login is a key in the second table, that login can then be used to get the group affiliation.
This process is called joining: we combined the group affiliation from the second table with the information on first and last names from the first table, using the login as the key.
A natural join performs this operation on all rows. At this point, I hope it is clear why I proposed a join. The two files correspond to two tables (which are invariant to changes in the order of the data). Joining them results in a third table that contains all the information, and printing that table answers the OP's question.
To utilize the power of the underlying relational algebra, a simple java.util.Map with Login as the key can be used.
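A minimal sketch of that idea, reusing the filter signature and the file format from the question: the groups are loaded into a Map keyed by login, and the mail lines are then streamed past it.
static void filter(Reader mails, Reader groups, Writer users) throws IOException {
    BufferedReader groupReader = new BufferedReader(groups);
    Map<String, String> groupByLogin = new HashMap<String, String>();
    groupReader.readLine(); // skip the "Login;Group;" header
    String line;
    while ((line = groupReader.readLine()) != null) {
        String[] parts = line.split(";");
        groupByLogin.put(parts[0], parts[1]); // login -> group
    }
    BufferedReader mailReader = new BufferedReader(mails);
    BufferedWriter out = new BufferedWriter(users);
    out.write(mailReader.readLine() + ";Group\n"); // header row of the result
    while ((line = mailReader.readLine()) != null) {
        String login = line.split(";")[0];
        out.write(line + ";" + groupByLogin.get(login) + "\n");
    }
    out.flush();
}
Unlike a real natural join, this sketch assumes that every login in the mails file also appears in the groups file.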

Related

Watson Natural Language Understanding Java Example

Does anyone have an example of making a call to Watson Natural Language Understanding using Java? The API docs only show Node. However, there is a class in the SDK to support it, but no documentation on how to construct the required 'Features', 'AnalyzeOptions' or 'Builder' input.
Here's a snippet that throws a 'Features cannot be Null' error - I'm just fumbling in the dark at this point:
String response = docConversionService.convertDocumentToHTML(doc).execute();
Builder b = new AnalyzeOptions.Builder();
b.html(response);
AnalyzeOptions ao = b.build();
nlu.analyze(ao);
Until the API reference is published, have you tried looking at the tests on GitHub? See here for NaturalLanguageUnderstandingIT.
I've gotten it working with a text string, and looking at the above test, it won't take much to get it working with a URL or HTML (changing the AnalyzeOptions builder call from text() to html(), for example).
Code example:
final NaturalLanguageUnderstanding understanding =
        new NaturalLanguageUnderstanding(
                NaturalLanguageUnderstanding.VERSION_DATE_2017_02_27);
understanding.setUsernameAndPassword(serviceUsername, servicePassword);
understanding.setEndPoint(url);
understanding.setDefaultHeaders(getDefaultHeaders());

final String testString =
        "In remote corners of the world, citizens are demanding respect"
        + " for the dignity of all people no matter their gender, or race, or religion, or disability,"
        + " or sexual orientation, and those who deny others dignity are subject to public reproach."
        + " An explosion of social media has given ordinary people more ways to express themselves,"
        + " and has raised people's expectations for those of us in power. Indeed, our international"
        + " order has been so successful that we take it as a given that great powers no longer"
        + " fight world wars; that the end of the Cold War lifted the shadow of nuclear Armageddon;"
        + " that the battlefields of Europe have been replaced by peaceful union; that China and India"
        + " remain on a path of remarkable growth.";

final ConceptsOptions concepts =
        new ConceptsOptions.Builder().limit(5).build();
final Features features =
        new Features.Builder().concepts(concepts).build();
final AnalyzeOptions parameters = new AnalyzeOptions.Builder()
        .text(testString).features(features).returnAnalyzedText(true).build();
final AnalysisResults results =
        understanding.analyze(parameters).execute();
System.out.println(results);
Make sure you populate your NLU service with default headers (setDefaultHeaders()). I pulled these from WatsonServiceTest (I'd post the link but my rep is too low; just use the Find File option on the WDC GitHub):
final Map<String, String> headers = new HashMap<String, String>();
headers.put(HttpHeaders.X_WATSON_LEARNING_OPT_OUT, String.valueOf(true));
headers.put(HttpHeaders.X_WATSON_TEST, String.valueOf(true));
return headers;

How do I read Windows Event Log field names using JNA?

I am using JNA to read Windows event logs. I can get a fair amount of data out of each record but I can't quite get the field names.
To read the logs I am doing:
EventLogIterator iter = new EventLogIterator("Security");
while (iter.hasNext()) {
    EventLogRecord record = iter.next();
    System.out.println("Event ID: " + record.getEventId()
            + ", Event Type: " + record.getType()
            + ", Event Source: " + record.getSource());
    String[] strings = record.getStrings();
    for (String str : strings) {
        System.out.println(str);
    }
}
I can get data like the ID, type, and source easily. Then I can get the list of strings, which may be for SubjectUserSid, SubjectUserName, etc.
I've been trying to get that data together with the field names. Is there an easy way to extract the field names/headers for each of the strings from record.getStrings()? I noticed there is a byte[] data variable in the record. I have tried to read this, but I haven't been able to get any useful information from it. I know I can get the data length and offset for certain variables, so I think I could extract the data I want that way, but I was wondering whether that is correct or whether there is an easier way.

Finding product in category

I need to find products in different categories on eBay. But when I use the tutorial code
ebay.apis.eblbasecomponents.FindProductsRequestType request = new ebay.apis.eblbasecomponents.FindProductsRequestType();
request.setCategoryID("Art");
request.setQueryKeywords("furniture");
I get the following error: QueryKeywords, CategoryID and ProductID cannot be used together.
So how is this done?
EDIT: the tutorial code is here.
EDIT 2: the link to the tutorial code has died, apparently. I've continued to search; the category cannot be used with the keyword search, but there is a Domain that you could presumably add to the request. Sadly it's not in the API, so I'm not sure whether it can be done at all.
The less-than-great eBay API doc is here.
This is my full request:
Shopping service = new ebay.apis.eblbasecomponents.Shopping();
ShoppingInterface port = service.getShopping();
bp = (BindingProvider) port;
bp.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointURL);

// Add the logging handler
List<Handler> handlerList = bp.getBinding().getHandlerChain();
if (handlerList == null) {
    handlerList = new ArrayList<Handler>();
}
LoggingHandler loggingHandler = new LoggingHandler();
handlerList.add(loggingHandler);
bp.getBinding().setHandlerChain(handlerList);

Map<String, Object> requestProperties = bp.getRequestContext();
Map<String, List<String>> httpHeaders = new HashMap<String, List<String>>();
requestProperties.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointURL);
httpHeaders.put("X-EBAY-API-CALL-NAME", Collections.singletonList(CALLNAME));
httpHeaders.put("X-EBAY-API-APP-ID", Collections.singletonList(APPID));
httpHeaders.put("X-EBAY-API-VERSION", Collections.singletonList(VERSION));
requestProperties.put(MessageContext.HTTP_REQUEST_HEADERS, httpHeaders);

// initialize WS operation arguments here
FindProductsRequestType request = new FindProductsRequestType();
request.setAvailableItemsOnly(true);
request.setHideDuplicateItems(true);
request.setMaxEntries(2);
request.setPageNumber(1);
request.setQueryKeywords("Postcard");
request.setDomain("");
The last line, which should set the domain like I need to, does not compile. Any idea how to solve this?
EDIT 3: I gave up on the Java API and I'm doing direct REST. The categories on eBay are actually domains now, and the URL looks like this:
String findProducts = "http://open.api.ebay.com/shopping?callname=FindProducts&responseencoding=XML&appid=" + APPID
+ "&siteid=0&version=525&"
+ "&AvailableItemsOnly=true"
+ "&QueryKeywords=" + keywords
+ "&MaxEntries=10"
+ "&DomainName=" + domainName;
This works, but do you want to hear a joke? It seems that not all the domains are listed there, so it doesn't really solve the problem. Pretty disappointing work by eBay.
The solution for finding items based on keywords in a category is to use findItemsAdvanced. It could have saved me a lot of time if the docs for FindProducts had stated this, instead of just saying that you can use either keyword search OR category search.
This is the API URL:
http://open.api.ebay.com/shopping?callname=findItemsAdvanced&responseencoding=XML&appid=" + APPID
        + "&siteid=0&version=525&"
        + "&AvailableItemsOnly=true"
        + "&QueryKeywords=" + keywords
        + "&categoryId=" + categoryId
        + "&MaxEntries=50"
For completion, if you want to get a list of all the top categories you can use this:
http://open.api.ebay.com/Shopping?callname=GetCategoryInfo&appid=" + APPID + "&siteid=0&CategoryID=-1&version=729&IncludeSelector=ChildCategories

java.lang.NullPointerException trying to get specific values from hashmap

I've spent several frustrating days on this now and would appreciate some help. I have a Java agent in Lotus Domino 8.5.3 which is activated by a CGI POST from my LotusScript validation agent, which checks that the customer has filled in the billing and delivery address form. This is the code that parses the incoming data into a HashMap where field names are mapped to their respective values:
HashMap hmParam = new HashMap(); // our HashMap for request_content data
// Grab transaction parameters from the form that called the agent (CGI: request_content)
if (contentDecoded != null) {
    String[] arrParam = contentDecoded.split("&");
    for (int i = 0; i < arrParam.length; i++) {
        int n = arrParam[i].indexOf("=");
        String paramName = arrParam[i].substring(0, n);
        String paramValue = arrParam[i].substring(n + 1, arrParam[i].length());
        hmParam.put(paramName, paramValue); // old HashMap
        if (paramName.equalsIgnoreCase("transaction_id")) {
            transactionID = paramValue;
            description = "Order " + transactionID + " from Fareham Wine Cellar";
            //System.out.println("OrderID = " + transactionID);
        }
        if (paramName.equalsIgnoreCase("amount")) {
            orderTotal = paramValue;
        }
        if (paramName.equalsIgnoreCase("deliveryCharge")) {
            shipping = paramValue;
        }
    }
}
}
The block of code above dates back over a year to my original integration of the shopping cart with the Barclays EPDQ payment gateway. In that agent I recover the specific values and build a form that is then submitted to the EPDQ CPI later in the agent, like this:
out.print("<input type=\"hidden\" name=\"shipping\" value=\"");
out.println(hmParam.get("shipping") + "\">");
I want to do exactly the same thing here, except that when I try, the agent crashes with a NullPointerException. I can successfully iterate through the HashMap with the snippet below, so I know the data is present, but I can't understand why I can't use myHashMap.get(key) to get each field value in the order I want them for the HTML form. The original agent in another application is still in use, so what is going on? The data, too, is essentially unchanged: String field names mapped to String values.
Iterator it = cgiData.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry pairs = (Map.Entry) it.next();
    out.println("<br />" + pairs.getKey() + " = " + pairs.getValue());
    //System.out.println(pairs.getKey() + " = " + pairs.getValue());
}
I did two things that may have had an impact. In the process of trying to debug what was going on, I needed these further imports:
import java.util.Iterator;
import java.util.Map;
Although I'm not iterating over the HashMap any more, I've left them in, just in case, which gives me the option of dumping the HashMap out to my system audit trail when the application is in production. In variations of the snippet below, after it started working, I was able to get to any of the data I needed, even if the value was null, and toString() also seemed to be optional, as it made no difference to the output.
String cgiValue = "";
cgiValue = hmParam.get("ship_to_lastname").toString();
out.println("<br />Lastname: " + cgiValue);
out.println("<br />Company name: " + hmParam.get("bill_to_company"));
out.println("<br />First name: " + hmParam.get("ship_to_firstname"));
The second thing I did while trying to get the code to work was to enable the option "Compile Java code with debugging information" for the agent; this may have done something to the way the project was built within the Domino Designer client.
I think I have to put this down to some sort of internal error created when Domino Designer compiled the code. I had a major crash last night while working on this, which necessitated a cold boot of my laptop. You may also find, when using Domino Designer 8.5.x, that strange things can happen if you don't completely close down all the tasks from time to time with KillNotes.

Obtain a share UpdateKey from LinkedIn using LinkedIn J and getNetworkUpdates() with Coldfusion

Using the "Network Updates API" example at the following link I am able to post network updates with no problem using client.postNetworkUpdate(updateText).
http://code.google.com/p/linkedin-j/wiki/GettingStarted
So posting works great. However, posting an update does not return an "UpdateKey", which is needed to retrieve stats for the post itself, such as comments, likes, etc. Without the UpdateKey I cannot retrieve stats. So what I would like to do is post, then retrieve the last post using the getNetworkUpdates() function; that retrieval will contain the UpdateKey I need later to retrieve stats. Here's a sample script in Java showing how to get network updates, but I need to do this in ColdFusion instead of Java.
Network network = client.getNetworkUpdates(EnumSet.of(NetworkUpdateType.STATUS_UPDATE));
System.out.println("Total updates fetched:" + network.getUpdates().getTotal());
for (Update update : network.getUpdates().getUpdateList()) {
    System.out.println("-------------------------------");
    System.out.println(update.getUpdateKey() + ":" + update.getUpdateContent().getPerson().getFirstName()
            + " " + update.getUpdateContent().getPerson().getLastName()
            + "->" + update.getUpdateContent().getPerson().getCurrentStatus());
    if (update.getUpdateComments() != null) {
        System.out.println("Total comments fetched:" + update.getUpdateComments().getTotal());
        for (UpdateComment comment : update.getUpdateComments().getUpdateCommentList()) {
            System.out.println(comment.getPerson().getFirstName() + " " + comment.getPerson().getLastName()
                    + "->" + comment.getComment());
        }
    }
}
Anyone have any thoughts on how to accomplish this using ColdFusion?
Thanks
I have not used that API, but I am guessing you could use the first two lines to grab the number of updates, then use the overloaded client.getNetworkUpdates(start, end) method to retrieve the last update and obtain its key.
Totally untested, but something along these lines:
<cfscript>
    ...
    // not sure about accessing the STATUS_UPDATE enum. One of these should work:
    // method 1
    STATUS_UPDATE = createObject("java", "com.google.code.linkedinapi.client.enumeration.NetworkUpdateType$STATUS_UPDATE");
    // method 2
    NetworkUpdateType = createObject("java", "com.google.code.linkedinapi.client.enumeration.NetworkUpdateType");
    STATUS_UPDATE = NetworkUpdateType.valueOf("STATUS_UPDATE");

    enumSet = createObject("java", "java.util.EnumSet");
    network = yourClientObject.getNetworkUpdates(enumSet.of(STATUS_UPDATE));
    numOfUpdates = network.getUpdates().getTotal();

    // Add error handling in case numOfUpdates = 0
    result = yourClientObject.getNetworkUpdates(numOfUpdates, numOfUpdates);
    lastUpdate = result.getUpdates().getUpdateList().get(0);
    key = lastUpdate.getUpdateKey();
</cfscript>
You can also use socialauth library to retrieve updates and post status on linkedin.
http://code.google.com/p/socialauth
