I have a properties file with all the fields. I need to dynamically draw a text box for each field read from the properties file, let the user enter values, and post them to a controller in Spring (Java).
Example Properties File
name=String
age=int
address=string
How can I do this from Java code?
My idea is to do it as follows:
1. Use AJAX to get the fields from the properties file on the server and return the field names and types in JSON format (key, value).
2. With that field data, generate the form inputs using jQuery or JavaScript.
3. Submit the form to the server and read the values.
Steps 1 and 2 are quite easy, so I only sketch step 1 below; for step 3, you can try the method further down to parse the query-string params into a map.
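A rough sketch of what step 1 could look like (the "/fields" path and "fields.properties" name are just placeholders, and this assumes a reasonably recent Spring MVC with Jackson on the classpath so the map is serialized to JSON):

import java.io.IOException;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FormFieldsController {

    // Serves the field definitions from the properties file for the AJAX call in step 1,
    // e.g. {"name":"String","age":"int","address":"string"}.
    @GetMapping("/fields")
    public Map<String, String> fields() throws IOException {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/fields.properties")) {
            props.load(in);
        }
        Map<String, String> result = new LinkedHashMap<>();
        for (String name : props.stringPropertyNames()) {
            result.put(name, props.getProperty(name));
        }
        return result;
    }
}

Now, the step 3 helper that parses the submitted query string into a map: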
public static Map<String, String> getMapFromQueryString(String queryString) {
    Map<String, String> returnMap = new HashMap<>();
    StringTokenizer stringTokenizer = new StringTokenizer(queryString, "&");
    while (stringTokenizer.hasMoreTokens()) {
        String key, value;
        String keyAndValue = stringTokenizer.nextToken();
        int indexOfEqual = keyAndValue.indexOf("=");
        if (indexOfEqual >= 0) {
            key = keyAndValue.substring(0, indexOfEqual);
            if ((indexOfEqual + 1) < keyAndValue.length()) {
                value = keyAndValue.substring(indexOfEqual + 1);
            } else {
                value = "";
            }
        } else {
            key = keyAndValue;
            value = "";
        }
        if (key.length() > 0) {
            returnMap.put(key, value);
        }
    }
    return returnMap;
}
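For example (a made-up query string, just to show the call):

Map<String, String> fields = getMapFromQueryString("name=John&age=30&address=");
System.out.println(fields.get("name")); // John
System.out.println(fields.get("age"));  // 30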
Now you can get all the values of the dynamic fields on the form.
Hope this solution helps.
In the Swagger specification of one of my APIs I've created a CSV array parameter like this:
...
- name: formats
  in: query
  description: The format(s) in which the generated report should be returned.
  type: array
  collectionFormat: csv
  items:
    type: string
    enum:
      - pdf
      - png
...
Then via 'swagger-codegen' I generated the server classes with -l 'jaxrs', and the client classes with -l 'java'.
The problem I'm having is that my client classes are creating the HTTP request like this:
http://.....:.../.../?...&formats=value1,value2&...
And when the request is handled by my server classes I'm getting an array with a single String with value 'value1,value2'
If my client classes created the HTTP request like this:
http://.....:.../.../?...&formats=value1&formats=value2&....
Then my server classes would correctly instantiate an array with two values, 'value1' and 'value2'.
In my generated client classes the function that is generating the query string value is like this:
/**
 * Format to {@code Pair} objects.
 *
 * @param collectionFormat collection format (e.g. csv, tsv)
 * @param name Name
 * @param value Value
 * @return A list of Pair objects
 */
public List<Pair> parameterToPairs(String collectionFormat, String name, Object value) {
    List<Pair> params = new ArrayList<Pair>();

    // preconditions
    if (name == null || name.isEmpty() || value == null) return params;

    Collection valueCollection = null;
    if (value instanceof Collection) {
        valueCollection = (Collection) value;
    } else {
        params.add(new Pair(name, parameterToString(value)));
        return params;
    }

    if (valueCollection.isEmpty()) {
        return params;
    }

    // get the collection format
    collectionFormat = (collectionFormat == null || collectionFormat.isEmpty() ? "csv" : collectionFormat); // default: csv

    // create the params based on the collection format
    if (collectionFormat.equals("multi")) {
        for (Object item : valueCollection) {
            params.add(new Pair(name, parameterToString(item)));
        }
        return params;
    }

    String delimiter = ",";
    if (collectionFormat.equals("csv")) {
        delimiter = ",";
    } else if (collectionFormat.equals("ssv")) {
        delimiter = " ";
    } else if (collectionFormat.equals("tsv")) {
        delimiter = "\t";
    } else if (collectionFormat.equals("pipes")) {
        delimiter = "|";
    }

    StringBuilder sb = new StringBuilder();
    for (Object item : valueCollection) {
        sb.append(delimiter);
        sb.append(parameterToString(item));
    }

    params.add(new Pair(name, sb.substring(1)));
    return params;
}
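For what it's worth, a small illustration of what the method above produces for the two collection formats (Pair and parameterToString are the types from the same generated client):

List<String> formats = Arrays.asList("pdf", "png");

// "csv" joins the values into a single pair, which serializes to ...&formats=pdf,png
List<Pair> csvPairs = parameterToPairs("csv", "formats", formats);

// "multi" produces one pair per value, which would serialize to ...&formats=pdf&formats=png
List<Pair> multiPairs = parameterToPairs("multi", "formats", formats);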
Maybe the problem isn't in the generated classes but rather in:
my web server's configuration (I'm using Wildfly 10.x), or
my Jersey configuration in web.xml
Any ideas?
I'm currently using Jersey REST to create a webpage that has a list of birds and taxonomy number, with a link to a page specifically about the bird in question. While my links work between the two pages, and my Bird Name and Taxonomy Number appear, I can't get the order or family name to appear. Following is the code in question.
#Path("/birdslist")
public class BirdsList extends Birds {
#GET
#Path("/all")
#Produces("text/html")
public String all() {
Iterator iterator = birdnames.keySet().iterator();
String page = "<html><title>All Birds</title><body>";
page += "<p>This is the list of all birds. <br> Click the taxonomy number of the bird you wish to view in detail.</p>";
while(iterator.hasNext()){
Object key = iterator.next();
String value = birdnames.get(key);
HashSet fam = family.get(key);
HashSet ord = order.get(key);
}
for (String key : birdnames.keySet()) {
page += String.format("<p>Name:%s <br> Taxonomy Number:<a href=%s>%s</a></p>",birdnames.get(key),key,
key);
getBird(key);
}
page += "</body></html>";
return page;
}
#GET
#Path("{key}")
#Produces("text/html")
public String getBird(#PathParam("key") String key) {
String page = "<html><title>Bird #: {key}</title><body>";
page += String.format("<p>This page contains info on the %s</p>",birdnames.get(key));
page += String.format("<p>Name:%s <br> Taxonomy Number:%s <br> Family:%s <br> Order:%s</p>",birdnames.get(key),key,family.get(key),order.get(key));
page += "<p>Please click <a href=all>here</a> to return to the list of all birds.</p>";
page += "</body></html>";
return page;
}
}
The family and order are each saved in a HashSet inside a HashMap, while the bird name is stored directly in a HashMap. The data was read from a CSV file and converted into these maps. Following is that code.
public class Birds {

    HashMap<String, String> birdnames;
    HashMap<String, HashSet<String>> family;
    HashMap<String, HashSet<String>> order;

    /**
     * Constructor reads the CSV of all birds
     */
    public Birds() {
        // long path to eBirds assuming Maven "mvn exec:java" is many levels up
        String fileName = "src/main/java/com/example/rest/eBirds.csv";
        boolean firstLine = true;
        this.birdnames = new HashMap<String, String>();
        this.family = new HashMap<String, HashSet<String>>();
        this.order = new HashMap<String, HashSet<String>>();
        try {
            BufferedReader R = new BufferedReader(new FileReader(fileName));
            String line;
            while (true) {
                line = R.readLine();
                if (line == null) break;
                if (firstLine) { // ignore the first line, it's not a bird
                    firstLine = false;
                    continue;
                }
                String[] fields = line.split(",");
                if (!fields[1].equalsIgnoreCase("species")) continue; // ignore all but species records
                birdnames.put(fields[0], fields[4]); // add this bird to name table
                // extract the order name from fields[6]
                String ordername = fields[6];
                if (!order.containsKey(ordername)) { // if needed, create first-time order set
                    order.put(ordername, new HashSet<String>());
                }
                order.get(ordername).add(fields[0]); // new order member by number for lookup
                // extract the family name from fields[7] -- removing quotes first if needed
                String famname = fields[7].replace("\"", "");
                if (!family.containsKey(famname)) { // if needed, create first-time family set
                    family.put(famname, new HashSet<String>());
                }
                family.get(famname).add(fields[0]); // new family member by number for lookup
            }
        }
        catch (IOException e) { System.out.println("Stack trace: " + e); }
    }
    ...
}
I've never used HashSets before; they were part of the given starter code. Our assignment was to create a list page and pages specific to each bird, and to link between the two. I just can't get these last two values to appear correctly. Can anyone help?
Here you use the same key for all three maps, birdnames, family and order:
while (iterator.hasNext()) {
    Object key = iterator.next();
    String value = birdnames.get(key);
    HashSet fam = family.get(key);
    HashSet ord = order.get(key);
}
But you initialize them with different keys:
// extract the order name from fields[6]
String ordername = fields[6];
if (!order.containsKey(ordername))
{ // if needed, create first-time order set
order.put(ordername, new HashSet<>());
}
order.get(ordername).add(fields[0]); // new order member by number for lookup
Here the key would be fields[6] and not the birdnames key.
If you want to keep using the same key, you could do the following for the orders:
if (!order.containsKey(fields[0]))
{
order.put(fields[0], new HashSet<>());
}
order.get(fields[0]).add(fields[6]);
Then you can use:
HashSet ord = order.get(key);
And you will receive all the orders for that bird name.
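Applied to both maps, the relevant part of the constructor would then look roughly like this (a sketch only, assuming Java 8 for computeIfAbsent):

// Key both maps by the bird's taxonomy number (fields[0]) so that
// family.get(key) and order.get(key) work in the resource class.
order.computeIfAbsent(fields[0], k -> new HashSet<>()).add(fields[6]);
family.computeIfAbsent(fields[0], k -> new HashSet<>()).add(fields[7].replace("\"", ""));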
If you don't want to change the initialization but still want to look up by the bird's key, you could do something like the following, but that is highly discouraged since it defeats the purpose of using a map in the first place:
Set<String> ord = new HashSet<>();
for (String tmp : order.keySet())
{
if (order.get(tmp).contains(key))
ord.add(tmp);
}
Here ord would contain all the orders for the key.
As you can see, you have to do much more redundant work if you don't switch the value and the key.
So I have a collection of emails and what I want to do is use them to output unique triplets (sender email, receiver email, timestamp) like so:
user1#stackoverflow.com user2#stackoverflow.com 09/12/2009 16:45
user1#stackoverflow.com user9#stackoverflow.com 09/12/2009 18:45
user3#stackoverflow.com user4#stackoverflow.com 07/05/2008 12:29
In the above example, user 1 sent a single email to multiple recipients (user 2 and user 9). To store the recipients, I created a data structure EdgeWritable (which implements WritableComparable) that holds the sender and recipient email addresses as well as a timestamp.
My mapper looks like this:
private final EdgeWritable edge = new EdgeWritable(); // Data structure for triplets.
private final NullWritable noval = NullWritable.get();
...
@Override
public void map(Text key, BytesWritable value, Context context)
        throws IOException, InterruptedException {
    byte[] bytes = value.getBytes();
    Scanner scanner = new Scanner(new ByteArrayInputStream(bytes), "UTF-8");
    String from = null; // Sender's Email address
    ArrayList<String> recipients = new ArrayList<String>(); // List of recipients' Email addresses
    long millis = -1; // Date

    // Parse information from file
    while (scanner.hasNext()) {
        String line = scanner.nextLine();
        if (line.startsWith("From:")) {
            from = procFrom(stripCommand(line, "From:")); // Get sender e-mail address.
        } else if (line.startsWith("To:")) {
            procRecipients(stripCommand(line, "To:"), recipients); // Populate recipients into a list.
        } else if (line.startsWith("Date:")) {
            millis = procDate(stripCommand(line, "Date:")); // Get timestamp.
        }
        if (line.equals("")) { // Empty line indicates the end of the header
            break;
        }
    }
    scanner.close();

    // Emit EdgeWritable as intermediate key containing Sender, Recipient and Timestamp.
    if (from != null && recipients.size() > 0 && millis != -1) {
        // EdgeWritable has 2 Text values (ew[0] and ew[1]) and a Timestamp. ew[0] is the sender, ew[1] is a recipient.
        edge.set(0, from); // Set ew[0]
        for (int i = 0; i < recipients.size(); i++) {
            edge.set(1, recipients.get(i)); // Set edge from sender to each recipient i.
            edge.setTS(millis); // Set date.
            context.write(edge, noval); // Emit the edge as an intermediate key with a null value.
        }
    }
}
...
My reducer simply formats the date and outputs the edges:
public void reduce(EdgeWritable key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
    String date = MailReader.sdf.format(edge.getTS());
    out.set(edge.get(0) + " " + edge.get(1) + " " + date); // same edge from Mapper (an EdgeWritable).
    context.write(noval, out); // same noval from Mapper (a NullWritable).
}
Using EdgeWritable as the intermediate key and NullWritable as the value (in the mapper) is a requirement; I'm not permitted to use other approaches. This is my first Hadoop / MapReduce program and I just want to know whether I'm going in the right direction. I have looked at plenty of MapReduce examples online and have never seen key/value pairs being emitted in a for-loop the way I have done it. I feel like I'm missing some sort of trick here, but using a for-loop in this way is the only approach I can think of.
Is this 'bad'? I hope this is clear but please let me know if any further clarification is needed.
The map method gets called once for each record, so your array list only ever holds the data for one record per call. Declare your array list at class level so that you can store values across all records. Then, in the cleanup method, you can do the emit logic you have written inside map. Try this and let me know if it works.
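A rough sketch of that pattern (EdgeWritable and its set/setTS methods are assumed to be the ones from the question; everything else here is illustrative):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MailMapper extends Mapper<Text, BytesWritable, EdgeWritable, NullWritable> {

    // Class-level list: survives across all map() calls handled by this mapper instance.
    private final List<EdgeWritable> edges = new ArrayList<>();
    private final NullWritable noval = NullWritable.get();

    @Override
    public void map(Text key, BytesWritable value, Context context) {
        // ... parse from / recipients / millis exactly as in the question, then,
        // instead of calling context.write(...), build one EdgeWritable per
        // recipient and remember it (set/setTS are the methods from the question):
        //
        // EdgeWritable edge = new EdgeWritable();
        // edge.set(0, from);
        // edge.set(1, recipient);
        // edge.setTS(millis);
        // edges.add(edge);
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Called once at the end of the task: emit everything collected above.
        for (EdgeWritable edge : edges) {
            context.write(edge, noval);
        }
    }
}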
Is it possible to parse a delimited file and find the column datatypes? E.g.:
Delimited file:
Email,FirstName,DOB,Age,CreateDate
test@test1.com,Test User1,20/01/2001,24,23/02/2015 14:06:45
test@test2.com,Test User2,14/02/2001,24,23/02/2015 14:06:45
test@test3.com,Test User3,15/01/2001,24,23/02/2015 14:06:45
test@test4.com,Test User4,23/05/2001,24,23/02/2015 14:06:45
Output:
Email datatype: email
FirstName datatype: Text
DOB datatype: date
Age datatype: int
CreateDate datatype: Timestamp
The purpose of this is to read a delimited file, construct a table creation query on the fly, and insert the data into that table.
I tried using Apache Validator; I believe we need to parse the complete file in order to determine each column's data type.
EDIT: The code that I've tried:
CSVReader csvReader = new CSVReader(new FileReader(fileName), ',');
String[] row = null;
int[] colLength = null;
int colCount = 0;
String[] colDataType = null;
String[] colHeaders = null;
String[] header = csvReader.readNext();
if (header != null) {
    colCount = header.length;
}
colLength = new int[colCount];
colDataType = new String[colCount];
colHeaders = new String[colCount];
for (int i = 0; i < colCount; i++) {
    colHeaders[i] = header[i];
}
int templength = 0;
String tempType = null;
IntegerValidator intValidator = new IntegerValidator();
DateValidator dateValidator = new DateValidator();
TimeValidator timeValidator = new TimeValidator();
while ((row = csvReader.readNext()) != null) {
    for (int i = 0; i < colCount; i++) {
        templength = row[i].length();
        colLength[i] = templength > colLength[i] ? templength : colLength[i];
        if (colHeaders[i].equalsIgnoreCase("email")) {
            logger.info("Col " + i + " is Email");
        } else if (intValidator.isValid(row[i])) {
            tempType = "Integer";
            logger.info("Col " + i + " is Integer");
        } else if (timeValidator.isValid(row[i])) {
            tempType = "Time";
            logger.info("Col " + i + " is Time");
        } else if (dateValidator.isValid(row[i])) {
            tempType = "Date";
            logger.info("Col " + i + " is Date");
        } else {
            tempType = "Text";
            logger.info("Col " + i + " is Text");
        }
        logger.info(row[i].length() + "");
    }
}
I'm not sure if this is the best way of doing this; any pointers in the right direction would help.
If you wish to write this yourself rather than use a third party library then probably the easiest mechanism is to define a regular expression for each data type and then check if all fields satisfy it. Here's some sample code to get you started (using Java 8).
import java.util.Arrays;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.regex.Pattern;

public enum DataType {
    DATETIME("\\d{2}/\\d{2}/\\d{4} \\d{2}:\\d{2}:\\d{2}"),
    DATE("\\d{2}/\\d{2}/\\d{4}"),
    EMAIL("\\w+@\\w+"),
    TEXT(".*");

    private final Predicate<String> tester;

    DataType(String regexp) {
        tester = Pattern.compile(regexp).asPredicate();
    }

    public static Optional<DataType> getTypeOfField(String[] fieldValues) {
        return Arrays.stream(values())
            .filter(dt -> Arrays.stream(fieldValues).allMatch(dt.tester))
            .findFirst();
    }
}
Note that this relies on the order of the enum values (e.g. testing for datetime before date).
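For instance, applied to the DOB column from the sample file above (a hypothetical usage):

String[] dobColumn = { "20/01/2001", "14/02/2001", "15/01/2001", "23/05/2001" };
Optional<DataType> type = DataType.getTypeOfField(dobColumn);
System.out.println(type.orElse(DataType.TEXT)); // prints DATE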
Yes, it is possible, and you do have to parse the entire file first. Have a set of rules for each data type and iterate over every row in the column: start off with every column having every data type, and cancel out a data type as soon as a row in that column violates one of its rules. After iterating over the column, check which data type is left. E.g. let's say we have two data types, integer and text. The rule for integer is that it must only contain the digits 0-9 and may begin with '-'; text can be anything.
Our column:
345
-1ab
123
The integer data type would be removed by the second row, so the column would be text. If row two were just -1, you would be left with both integer and text, and the answer would be integer, because text is never removed (our rule says text can be anything). You basically don't have to check for text: if you are left with no other data type, the answer is text. Hope this answers your question.
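A minimal sketch of that elimination approach (not the poster's actual code; class and rule names are made up for illustration):

import java.util.*;
import java.util.function.Predicate;

public class ColumnTypeGuesser {

    // Candidate rules, ordered from most to least specific. "TEXT" accepts anything.
    private static final Map<String, Predicate<String>> RULES = new LinkedHashMap<>();
    static {
        RULES.put("INTEGER", v -> v.matches("-?[0-9]+"));
        RULES.put("TEXT", v -> true);
    }

    // Returns the most specific type that every value in the column satisfies.
    public static String guessType(List<String> columnValues) {
        Set<String> candidates = new LinkedHashSet<>(RULES.keySet());
        for (String value : columnValues) {
            candidates.removeIf(type -> !RULES.get(type).test(value));
        }
        return candidates.iterator().next(); // TEXT always survives, so this is safe
    }

    public static void main(String[] args) {
        System.out.println(guessType(Arrays.asList("345", "-1ab", "123"))); // TEXT
        System.out.println(guessType(Arrays.asList("345", "-1", "123")));   // INTEGER
    }
}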
I needed slightly similar logic for my project. I searched a lot but did not find the right solution. In my case I need to pass a String object to a method that should return the datatype of that object. Finally I found the post from @sprinter; it looks similar to my logic, but I need to pass a String instead of a String array.
I modified the code for my needs and have posted it below.
import java.util.Arrays;
import java.util.Optional;
import java.util.regex.Pattern;

public enum DataType {
    DATE("\\d{2}/\\d{2}/\\d{4}"),
    EMAIL("@gmail"),
    NUMBER("[0-9]+"),
    STRING("^[A-Za-z0-9? ,_-]+$");

    private final String regEx;

    DataType(String regEx) {
        this.regEx = regEx;
    }

    public String getRegEx() {
        return regEx;
    }

    public static Optional<DataType> getTypeOfField(String str) {
        return Arrays.stream(DataType.values())
                .filter(dt -> Pattern.compile(dt.getRegEx()).matcher(str).matches())
                .findFirst();
    }
}
For example:
Optional<DataType> dataType = getTypeOfField("Bharathiraja");
System.out.println(dataType);
System.out.println(dataType.get());
Output:
Optional[STRING]
STRING
Please note that the regular expression patterns vary based on requirements, so modify the patterns to suit your needs rather than taking them as-is.
Happy coding!
I'm looking for a way to translate an EMV response with Java like with this online option:
http://www.emvlab.org/tlvutils/
where you put something like this EMV response:
6f3a8407a0000000031010a52f500b56495341204352454449548701015f2d086573656e707466729f12074352454449544f9f1101019f38039f1a02
and it will show you everything perfectly. I started writing something myself, but then I realized we could end up with what looks like two 9F38 (PDOL) strings: not necessarily two identical tags, which I know is impossible, but the value of one tag could end in 9F and the next tag could start with 38, which would look like the tag 9F38 and give me an error... Now that I mention it, is that even possible? That was one of the main reasons why I stopped writing my own function.
Does any of you have written a function to do this already?
Thanks!
https://github.com/binaryfoo/emv-bertlv should do the trick.
Using your example, the following code:
List<DecodedData> decoded = new RootDecoder().decode("6f3a8407a0000000031010a52f500b56495341204352454449548701015f2d086573656e707466729f12074352454449544f9f1101019f38039f1a02", "EMV", "constructed");
new DecodedWriter(System.out).write(decoded, "");
Will output:
[6F (FCI template)] 8407A0000000031010A52F500B56495341204352454449548701015F...1A02
[84 (dedicated file name)] A0000000031010
[A5 (FCI proprietary template)] 500B56495341204352454449548701015F2D086573656E707466729F...1A02
[50 (application label)] VISA CREDIT
[87 (application priority indicator)] 01
[5F2D (language preference)] esenptfr
[9F12 (application preferred name)] CREDITO
[9F11 (issuer code table index)] 01
[9F38 (PDOL - Processing data object list)] 9F1A02
9F1A (terminal country code) 2 bytes
This project has code to deal with EMV data http://code.google.com/p/javaemvreader/
You are on the right track. You can easily build your own EMV parser using the technique called TLV (Tag, Length, Value). Your raw data always comes back as a tag, followed by the length, and the length tells you how many bytes of value follow. That also answers your worry about 9F/38: because the parser always skips exactly 'length' value bytes before reading the next tag, a value byte 9F followed by 38 is never misread as the tag 9F38.
So create three methods
method 1: Contains all the short tags
method 2: Contains all the long tags
method 3: Contains all the proprietary tags
So when you pass in your raw emv tag:
6f3a8407a0000000031010a52f500b56495341204352454449548701015f2d086573656e707466729f12074352454449544f9f1101019f38039f1a02
Loop through those three methods and they will give you all the information you need.
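A rough skeleton of that parsing loop might look like this (a sketch only; real EMV parsing also has to handle multi-byte lengths and constructed templates, and the class name here is made up):

public class TlvSketch {
    public static void main(String[] args) {
        String hex = "9F1101019F38039F1A02"; // a fragment of the response from the question
        int i = 0;
        while (i < hex.length()) {
            // Tag: one byte, or two bytes when the low 5 bits of the first byte are all set (e.g. 9F38).
            String tag = hex.substring(i, i + 2);
            if ((Integer.parseInt(tag, 16) & 0x1F) == 0x1F) {
                tag = hex.substring(i, i + 4);
            }
            i += tag.length();

            // Length: assume a single length byte (values up to 127 bytes) to keep the sketch short.
            int length = Integer.parseInt(hex.substring(i, i + 2), 16);
            i += 2;

            // Value: exactly 'length' bytes, i.e. length * 2 hex characters.
            String value = hex.substring(i, i + length * 2);
            i += length * 2;

            System.out.println(tag + " (" + length + " bytes): " + value);
        }
    }
}

Note how the value bytes 9F1A02 inside the 9F38 (PDOL) tag are skipped as a whole rather than being mistaken for new tags.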
Use the function below, which gives you a map of the TLV values:
// Note: isLastTagByte(...), isLastLengthByte(...) and TLVModel are helper
// pieces referenced by this answer but not shown here.
public LinkedHashMap<String, TLVModel> parseBERTLVTag(String tlv) throws DecoderException {
    if (tlv == null || "".equalsIgnoreCase(tlv)) {
        return null;
    }
    System.out.println("============= START [" + tlv + "]==================");
    boolean inTagRead = true;
    Map<String, String> tags = new HashMap<>();
    // Result map, declared here (it was missing in the original snippet).
    LinkedHashMap<String, TLVModel> map = new LinkedHashMap<>();
    StringBuilder _tmp = new StringBuilder();
    String lastTag = "";
    int old_index = 0;
    boolean isFirstTagByte = true;
    int len = 0;
    boolean more = true;
    String data = "";
    while (more) {
        len = 0;
        String hByte = tlv.substring(old_index, (old_index = old_index + 2));
        if (inTagRead) {
            if (isLastTagByte(hByte, isFirstTagByte)) {
                inTagRead = false;
                _tmp.append(hByte);
                lastTag = _tmp.toString();
                System.out.println("Tag[" + lastTag + "]");
                tags.put(lastTag, null);
                _tmp = new StringBuilder();
            } else {
                _tmp.append(hByte);
            }
            isFirstTagByte = false;
        } else { // Length
            isFirstTagByte = true;
            if (isLastLengthByte(hByte)) {
                inTagRead = true;
                _tmp.append(hByte);
                len = Integer.parseInt(_tmp.toString(), 16);
                // read len*2
                System.out.println(" Length [" + len + "]");
                data = tlv.substring(old_index, (old_index = old_index + len * 2));
                String tmpData = lastTag + ":" + _tmp.toString() + ":h" + data;
                System.out.println(" Data [" + tmpData + "]");
                _tmp = new StringBuilder();
                tags.put(lastTag, tmpData);
            } else {
                _tmp.append(hByte);
            }
        }
        more = tlv.length() <= old_index ? false : true;
        System.out.println("tag " + lastTag + " value " + data + " length " + len);
        if (lastTag.length() > 0 && data.length() > 0 && len > 0) {
            if (!map.containsKey(lastTag)) {
                map.put(lastTag, new TLVModel().setTag(lastTag).setLength(len).setValue(data));
            }
        }
    } // END OF WHILE
    System.out.println("------------ as MAP ---------------------");
    System.out.println("size " + map.size());
    for (Map.Entry mp : map.entrySet()) {
        System.out.println("key " + mp.getKey() + " value " + mp.getValue());
    }
    return map.size() > 0 ? map : null;
}
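Hypothetical usage with the response from the question (assuming the helper methods behave as described):

LinkedHashMap<String, TLVModel> parsed = parseBERTLVTag(
        "6f3a8407a0000000031010a52f500b56495341204352454449548701015f2d086573656e707466729f12074352454449544f9f1101019f38039f1a02");
if (parsed != null) {
    parsed.forEach((tag, tlv) -> System.out.println(tag + " -> " + tlv));
}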