Brief description:
Basically, inside my code I want to add a List<String> as a value to every key inside a ListValuedMap<String, List<String>>. I did some testing on the created ListValuedMap spreadNormal with
System.out.println(spreadNormal.keySet());
System.out.println(spreadNormal.values());
and it showed me that the keys are in the map (unsorted), but the corresponding values are empty. I fill the List<String> via Collections.addAll(list, elements...) and delete its contents with list.clear() after each loop.
I would have expected that a copy of these values stays in my ListValuedMap, but my results are:
[Agios Pharmaceuticals Inc., Vicor Corp., EDP Renováveis S.A., Envista Holdings Corp., JENOPTIK AG,...]
[[], [], [], [], [], ...]
My expected result is more like this:
[Agios Pharmaceuticals Inc., ...] =
[["US00847X1046, "30,60", "30,80", "0,65"], ....]
Can you provide some explanation for that? Is the default behaviour of the Collections.addAll method to copy the reference to an object, instead of the object itself?
The corresponding code section is highlighted with // ++++++++++
Full code example (Eclipse):
import java.io.*;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
import java.util.*;
import org.apache.commons.collections4.ListValuedMap;
import org.apache.commons.collections4.MultiSet;
import org.apache.commons.collections4.multimap.ArrayListValuedHashMap;
public class Webdata {
public static void main(String[] args) throws IOException
{
long start = System.currentTimeMillis();
parseData();
System.out.println(secondsElapsed(start)+" seconds processing time");
}
private static void parseData() throws IOException
{
List <String> subdirectories = new ArrayList<>();
String chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
String errorMessage1 = "String formatting problem";
String errorMessage2 = "String object non existent";
for (int i=0; i< chars.length(); i++)
{
subdirectories.add("https://www.tradegate.de/indizes.php?buchstabe="+chars.charAt(i));
}
List <String> stockMetadata = new ArrayList<>();
ListValuedMap <String, List<String>> nonError = new ArrayListValuedHashMap<>();
ListValuedMap <String, List<String>> numFormatError = new ArrayListValuedHashMap<>();
ListValuedMap <String, List<String>> nullPointerError = new ArrayListValuedHashMap<>();
ListValuedMap <String, List<String>> spreadTooHigh = new ArrayListValuedHashMap<>();
ListValuedMap <String, List<String>> spreadNormal = new ArrayListValuedHashMap<>();
int cap1 = 44;
int cap2 = 56;
for (int suffix = 0; suffix <chars.length(); suffix++)
{
Document doc = Jsoup.connect(subdirectories.get(suffix)).get();
Elements htmlTableRows = doc.getElementById("kursliste_abc").select("tr");
htmlTableRows.forEach( tr->
{
String stockName = tr.child(0).text();
String bid_price = tr.child(1).text();
String ask_price = tr.child(2).text();
String isin = tr.child(0).selectFirst("a").absUrl("href").substring(cap1,cap2);
// +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
try
{
if (calcSpread(bid_price, ask_price) < 5)
{
Collections.addAll(stockMetadata, isin, bid_price, ask_price, calcSpread(bid_price, ask_price).toString());
spreadNormal.put(stockName,stockMetadata);
}
else if (calcSpread(bid_price, ask_price) > 5)
{
Collections.addAll(stockMetadata, isin, bid_price, ask_price, calcSpread(bid_price, ask_price).toString());
spreadTooHigh.put(stockName,stockMetadata);
}
stockMetadata.clear();
}
catch (NumberFormatException e)
{
Collections.addAll(stockMetadata, e.getMessage());
numFormatError.put(stockName, stockMetadata);
stockMetadata.clear();
}
catch (NullPointerException Ne)
{
Collections.addAll(stockMetadata, Ne.getMessage());
nullPointerError.put(stockName, stockMetadata);
stockMetadata.clear();
} //end of try-catch
// +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
}); //end of for-each loop htmlTableRows
} //end of JSOUP method
System.out.println(spreadNormal.keySet());
System.out.println(spreadNormal.values());
} //end of parseData()
public static Float calcSpread(String arg1, String arg2)
{
try
{
Float bid = Float.parseFloat(arg1.replace("," , "."));
Float ask = Float.parseFloat(arg2.replace("," , "."));
Float spread = ((ask-bid)/ask)*100;
return spread;
}
catch (NumberFormatException e)
{
return null;
}
}
public static Long secondsElapsed(Long start) {
Long startTime = start;
Long endTime = System.currentTimeMillis();
Long timeDifference = endTime - startTime;
return timeDifference / 1000;
}
} //end of class
pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>TEST</groupId>
<artifactId>TEST</artifactId>
<version>0.0.1-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>org.jsoup</groupId>
<artifactId>jsoup</artifactId>
<version>1.11.3</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-collections4</artifactId>
<version>4.4</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.0</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src</sourceDirectory>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<release>18</release>
</configuration>
</plugin>
</plugins>
</build>
</project>
There is nothing wrong with Collections.addAll.
I believe you expect all of your maps to be ListValuedMap<String, String>, which is basically a Map<String, List<String>>. As such, your ListValuedMap<String, List<String>> is effectively a Map<String, List<List<String>>>: put(stockName, stockMetadata) stores a reference to the single stockMetadata list, so your later stockMetadata.clear() empties exactly the lists you just stored, which is why you see [[], [], ...].
Just update each of your maps to be like below so each key is mapped to a List<String>:
ListValuedMap<String, String> nonError = new ArrayListValuedHashMap<>();
ListValuedMap<String, String> numFormatError = new ArrayListValuedHashMap<>();
ListValuedMap<String, String> nullPointerError = new ArrayListValuedHashMap<>();
ListValuedMap<String, String> spreadTooHigh = new ArrayListValuedHashMap<>();
ListValuedMap<String, String> spreadNormal = new ArrayListValuedHashMap<>();
And then, instead of using ListValuedMap.put(K key, V value), you have to use ListValuedMap.putAll(K key, Iterable<? extends V> values), like this:
spreadNormal.putAll(stockName, stockMetadata);
spreadTooHigh.putAll(stockName, stockMetadata);
numFormatError.putAll(stockName, stockMetadata);
nullPointerError.putAll(stockName, stockMetadata);
The putAll method will iterate over the stockMetadata iterable and add its elements one by one into the map's underlying list, so the subsequent stockMetadata.clear() no longer affects what is stored.
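As a minimal, self-contained sketch of that difference (the ISIN and price below are just the sample values from the question):
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.collections4.ListValuedMap;
import org.apache.commons.collections4.multimap.ArrayListValuedHashMap;

public class PutAllDemo {
    public static void main(String[] args) {
        ListValuedMap<String, String> spreadNormal = new ArrayListValuedHashMap<>();
        List<String> stockMetadata = new ArrayList<>();

        // fill the reusable buffer, copy its elements into the map, then clear it
        stockMetadata.add("US00847X1046");
        stockMetadata.add("30,60");
        spreadNormal.putAll("Agios Pharmaceuticals Inc.", stockMetadata);
        stockMetadata.clear();

        // the stored values survive the clear() because putAll copied the elements
        System.out.println(spreadNormal.get("Agios Pharmaceuticals Inc."));
        // prints: [US00847X1046, 30,60]
    }
}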
private void getUsersWithin24Hours(String id, Map<String, Object> payload) throws JSONException {
JSONObject json = new JSONObject(String.valueOf(payload.get("data")));
Query query = new Query();
query.addCriteria(Criteria.where("user_id").is(id).and("timezone").in(json.get("timezone")).and("gender").in(json.get("gender")).and("locale").in(json.get("language")).and("time").gt(getDate()));
mongoTemplate.getCollection("user_log").distinct("user_id", query.getQueryObject());
}
I wanted to build a query and get results from MongoDB, and I succeeded with this mongo shell command:
db.getCollection('user_log').find({"user_id" : "1", "timezone" : {$in: [5,6]}, "gender" : {$in : ["male", "female"]}, "locale" : {$in : ["en_US"]}, "time" : {$gt : new ISODate("2017-01-26T16:57:52.354Z")}})
but when I tried it from Java, it gave me the error below:
org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class org.json.JSONArray
What is the ideal way to do this?
Hint: I actually think the error occurs in the json.get("timezone") part of my code, because it contains an array. When I use hardcoded string arrays, this code works.
You don't have to use JSONObject/JSONArray for the conversion.
If payload.get("data") is a Map, replace it with the line below:
BasicDBObject json = new BasicDBObject((Map) payload.get("data"));
If payload.get("data") holds a JSON string, replace it with this line instead:
BasicDBObject json = (BasicDBObject) JSON.parse((String) payload.get("data"));
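From there, a minimal sketch of how the whole query could be assembled with plain driver types (field names and the id/json/getDate() variables are taken from the question; this assumes the values inside json are plain Lists and that mongoTemplate.getCollection returns the classic DBCollection):
// build {"user_id": id, "timezone": {$in: [...]}, ...} with driver-native types,
// so no codec for org.json classes is ever needed
BasicDBObject query = new BasicDBObject("user_id", id)
        .append("timezone", new BasicDBObject("$in", json.get("timezone")))
        .append("gender", new BasicDBObject("$in", json.get("gender")))
        .append("locale", new BasicDBObject("$in", json.get("language")))
        .append("time", new BasicDBObject("$gt", getDate()));
mongoTemplate.getCollection("user_log").distinct("user_id", query);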
Here's an example from a MongoDB University course, using a MongoDB database named "students" with a collection named "grades":
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mongodb</groupId>
<artifactId>test</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>test</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver</artifactId>
<version>3.2.2</version>
</dependency>
</dependencies>
</project>
com/mongo/Main.java
package com.mongo;
import com.mongodb.MongoClient;
import com.mongodb.client.*;
import org.bson.Document;
import org.bson.conversions.Bson;
public class Main {
public static void main(String[] args) {
MongoClient client = new MongoClient();
MongoDatabase database = client.getDatabase("students");
final MongoCollection<Document> collection = database.getCollection("grades");
Bson sort = new Document("student_id", 1).append("score", 1);
MongoCursor<Document> cursor = collection.find().sort(sort).iterator();
try {
Integer student_id = -1;
while (cursor.hasNext()) {
Document document = cursor.next();
// Doing more stuff
}
} finally {
cursor.close();
}
}
}
I'm trying to update the imports in our custom claims Java files. So far I have found that not much has changed except one import: org.wso2.carbon.apimgt.impl.token.URLSafeJWTGenerator moved to org.wso2.carbon.apimgt.keymgt.token.URLSafeJWTGenerator. When I apply this change, the compiler reports that the populateCustomClaims method no longer works as an override.
JAVA CODE
import edu.wso2.is.helper.DOMAINEntity;
import edu.wso2.is.helper.DOMAINEntityHelper;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.apimgt.impl.APIConstants;
import org.wso2.carbon.apimgt.impl.dto.APIKeyValidationInfoDTO;
import org.wso2.carbon.apimgt.impl.token.URLSafeJWTGenerator;
import org.wso2.carbon.apimgt.api.*;
import org.wso2.carbon.user.core.util.UserCoreUtil;
import org.apache.commons.codec.binary.Base64;
import java.util.HashMap;
import java.util.Map;
public class CustomTokenGenerator extends URLSafeJWTGenerator {
private static final Log log = LogFactory.getLog(CustomTokenGenerator.class);
static String DOMAIN_DIALECT = "http://domain.edu/claims";
private final DOMAINEntityHelper DOMAINEntityHelper = new DOMAINEntityHelper();
public CustomTokenGenerator() {
}
//there is no access to the api call headers, etc. only what was passed in the DTO
public Map<String, String> populateCustomClaims(APIKeyValidationInfoDTO keyValidationInfoDTO, String apiContext, String version, String accessToken)
throws APIManagementException {
if (log.isDebugEnabled())
log.debug("populateCustomClaims starting");
Map<String, String> map = new HashMap<>();//map for custom claims
Map<String, String> claims = super.populateCustomClaims(keyValidationInfoDTO,apiContext,version,accessToken);
boolean isApplicationToken =
keyValidationInfoDTO.getUserType().equalsIgnoreCase(APIConstants.ACCESS_TOKEN_USER_TYPE_APPLICATION);
if (isApplicationToken) {
if (log.isDebugEnabled())
log.debug("Application Token detected - no resource owner claims will be added");
}
else {
String netid = extractNetId(keyValidationInfoDTO.getEndUserName());
if (log.isDebugEnabled())
log.debug("adding resource owner claims to map - netid " + netid);
map = addResourceOwnerClaims(netid, map);
}
String consumerKey = keyValidationInfoDTO.getConsumerKey();
String dialect = getDialectURI();
String subscriberNetId = extractNetId(keyValidationInfoDTO.getSubscriber());
if (log.isDebugEnabled())
log.debug("adding client claims to map - subscriberNetId " + subscriberNetId + " client_id " + consumerKey);
map.put(dialect + "/client_id",consumerKey);
map = addClientClaims(consumerKey, subscriberNetId, map);
if (log.isDebugEnabled())
log.debug("populateCustomClaims ending");
return map;
}
private Map<String, String> addClientClaims(String consumerKey, String subscriberNetId, Map<String, String> map) {
if (log.isDebugEnabled())
log.debug("addClientClaims starting");
if (consumerKey == null) {
return map;
}
boolean isConsumerClaims = true;
DOMAINEntity identifiers = DOMAINEntityHelper.getDOMAINEntityFromConsumerKey(consumerKey);
if (identifiers == null) {
if (log.isDebugEnabled())
log.debug("No claims found for consumerKey, using subscriberNetId");
isConsumerClaims = false;
identifiers = DOMAINEntityHelper.getDOMAINEntityFromNetId(subscriberNetId);
if (identifiers == null)
return map;
}
if (isConsumerClaims)
map.put(DOMAIN_DIALECT + "/client_claim_source", "CLIENT_ID");
else
map.put(DOMAIN_DIALECT + "/client_claim_source", "CLIENT_SUBSCRIBER");
map.put(DOMAIN_DIALECT + "/client_subscriber_net_id", subscriberNetId);
map.put(DOMAIN_DIALECT + "/client_person_id", identifiers.getPersonId());
map.put(DOMAIN_DIALECT + "/client_net_id", identifiers.getNetId());
map.put(DOMAIN_DIALECT + "/client_surname", identifiers.getSurname());
if (log.isDebugEnabled())
log.debug("addClientClaims ending");
return map;
}
/* adds resource owner credentials to the map */
private Map<String, String> addResourceOwnerClaims(String netid, Map<String, String> map) {
if (log.isDebugEnabled())
log.debug("addResourceOwnerClaims starting");
if (netid == null) {
return map;
}
DOMAINEntity identifiers = DOMAINEntityHelper.getDOMAINEntityFromNetId(netid);
if (identifiers == null) {
return map;
}
map.put(DOMAIN_DIALECT + "/resourceowner_person_id", identifiers.getPersonId());
map.put(DOMAIN_DIALECT + "/resourceowner_domain_id", identifiers.getDomainId());
map.put(DOMAIN_DIALECT + "/resourceowner_surname", identifiers.getSurname());
map.put(DOMAIN_DIALECT + "/resourceowner_rest_of_name", identifiers.getRestOfName());
map.put(DOMAIN_DIALECT + "/resourceowner_surname_position", identifiers.getSurnamePosition());
if (log.isDebugEnabled())
log.debug("addResourceOwnerClaims ending");
return map;
}
private String extractNetId(String carbonIdentifier) {
if (log.isDebugEnabled()) {
log.debug("extractNetId starting");
log.debug("step 1: carbonIdentifier is " + carbonIdentifier);
}
String netid = UserCoreUtil.removeDomainFromName(carbonIdentifier);
if (log.isDebugEnabled())
log.debug("step 2: after remove domain netid is " + netid);
if (netid != null) {
if (netid.endsWith("#carbon.super")) {
netid = netid.replace("#carbon.super", "");
}
}
if (log.isDebugEnabled())
log.debug("extractNetId ending with result " + netid);
return netid;
}
}
I also updated the pom.xml dependency
XML CODE
<?xml version="1.0" encoding="utf-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>edu.wso2.is</groupId>
<artifactId>edu.wso2.is.CustomClaimsGenerator</artifactId>
<version>1.3.0</version>
<packaging>jar</packaging>
<name>Custom Claims Generator</name>
<repositories>
<repository>
<releases>
<enabled>true</enabled>
<updatePolicy>daily</updatePolicy>
<checksumPolicy>ignore</checksumPolicy>
</releases>
<id>wso2-nexus</id>
<url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
</repository>
</repositories>
<dependencies>
<dependency>
<groupId>org.wso2.carbon.identity</groupId>
<artifactId>org.wso2.carbon.identity.core</artifactId>
<version>5.2.2</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon.identity</groupId>
<artifactId>org.wso2.carbon.identity.application.common</artifactId>
<version>5.2.2</version>
</dependency>
<dependency>
<groupId>commons-codec.wso2</groupId>
<artifactId>commons-codec</artifactId>
<version>1.4.0.wso2v1</version>
</dependency>
<dependency>
<groupId>org.wso2.carbon.apimgt</groupId>
<artifactId>org.wso2.carbon.apimgt.keymgt</artifactId>
<version>6.0.4</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
Any help or a pointer in some direction would be much appreciated.
THANK YOU!
The populateCustomClaims() signature changed in APIM 2.0.0. It now takes a TokenValidationContext object:
public Map<String, String> populateCustomClaims(TokenValidationContext validationContext)
throws APIManagementException {
The code is here.
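A minimal sketch of what the updated override inside CustomTokenGenerator could look like. The accessor names on TokenValidationContext (getValidationInfoDTO(), getContext(), getVersion(), getAccessToken()) are assumptions; verify them against the class in your exact APIM version. buildClaims stands in for the existing claim-building code factored into a helper:
@Override
public Map<String, String> populateCustomClaims(TokenValidationContext validationContext)
        throws APIManagementException {
    // recover the values the old signature received as separate parameters
    // (accessor names are assumptions - check TokenValidationContext in your APIM version)
    APIKeyValidationInfoDTO keyValidationInfoDTO = validationContext.getValidationInfoDTO();
    String apiContext = validationContext.getContext();
    String version = validationContext.getVersion();
    String accessToken = validationContext.getAccessToken();
    // from here the original claim-building logic can run unchanged
    return buildClaims(keyValidationInfoDTO, apiContext, version, accessToken);
}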
I have a TSV file, where the first line is the header. I want to create a JavaPairRDD from this file. Currently, I'm doing so with the following code:
TsvParser tsvParser = new TsvParser(new TsvParserSettings());
List<String[]> allRows;
List<String> headerRow;
try (BufferedReader reader = new BufferedReader(new FileReader(myFile))) {
allRows = tsvParser.parseAll(reader);
//Removes the header row
headerRow = Arrays.asList(allRows.remove(0));
}
JavaPairRDD<String, MyObject> myObjectRDD = javaSparkContext
.parallelize(allRows)
.mapToPair(row -> new Tuple2<>(row[0], myObjectFromArray(row)));
I was wondering if there was a way to have the javaSparkContext read and process the file directly instead of splitting the operation into two parts.
EDIT: This is not a duplicate of How do I convert csv file to rdd, because I'm looking for an answer in Java, not Scala.
Use https://github.com/databricks/spark-csv:
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
SQLContext sqlContext = new SQLContext(sc);
DataFrame df = sqlContext.read()
.format("com.databricks.spark.csv")
.option("inferSchema", "true")
.option("header", "true")
.option("delimiter","\t")
.load("cars.csv");
df.select("year", "model").write()
.format("com.databricks.spark.csv")
.option("header", "true")
.save("newcars.csv");
Try below code to read CSV file and create JavaPairRDD.
public class SparkCSVReader {
public static void main(String[] args) {
SparkConf conf = new SparkConf().setAppName("CSV Reader");
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> allRows = sc.textFile("c:\\temp\\test.csv");//read csv file
String header = allRows.first();//take out header
JavaRDD<String> filteredRows = allRows.filter(row -> !row.equals(header));//filter header
JavaPairRDD<String, MyCSVFile> filteredRowsPairRDD = filteredRows.mapToPair(parseCSVFile);//create pair
filteredRowsPairRDD.foreach(data -> {
System.out.println(data._1() + " ### " + data._2().toString());// print row and object
});
sc.stop();
sc.close();
}
private static PairFunction<String, String, MyCSVFile> parseCSVFile = (row) -> {
String[] fields = row.split(",");
return new Tuple2<String, MyCSVFile>(row, new MyCSVFile(fields[0], fields[1], fields[2]));
};
}
You can also use Databricks spark-csv (https://github.com/databricks/spark-csv). spark-csv is also included in Spark 2.0.0.
Apache Spark 2.x has a built-in CSV reader, so you don't have to use https://github.com/databricks/spark-csv:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
/**
 * @author cpu11453local
 */
public class Main {
public static void main(String[] args) {
SparkSession spark = SparkSession.builder()
.master("local")
.appName("meowingful")
.getOrCreate();
Dataset<Row> df = spark.read()
.option("header", "true")
.option("delimiter","\t")
.csv("hdfs://127.0.0.1:9000/data/meow_data.csv");
df.show();
}
}
And maven file pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.meow.meowingful</groupId>
<artifactId>meowingful</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.2.0</version>
</dependency>
</dependencies>
</project>
I'm the author of uniVocity-parsers and can't help you much with Spark, but I believe something like this can work for you:
TsvParserSettings parserSettings = new TsvParserSettings();
parserSettings.setHeaderExtractionEnabled(true); //captures the header row
parserSettings.setProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        String[] headers = context.headers(); //not sure if you need them
        JavaPairRDD<String, MyObject> myObjectRDD = javaSparkContext
                .mapToPair(r -> new Tuple2<>(r[0], myObjectFromArray(r)));
        //process your stuff.
    }
});
If you want to parallelize the processing of each row, you can wrap a ConcurrentRowProcessor:
parserSettings.setProcessor(new ConcurrentRowProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        String[] headers = context.headers(); //not sure if you need them
        JavaPairRDD<String, MyObject> myObjectRDD = javaSparkContext
                .mapToPair(r -> new Tuple2<>(r[0], myObjectFromArray(r)));
        //process your stuff.
    }
}, 1000)); //1000 rows loaded in memory.
Then just call to parse:
new TsvParser(parserSettings).parse(myFile);
Hope this helps!
This is my Java project structure:
src/main/java
|_LoadXml.java
src/main/resources/
|_config.xml
src/test/java
src/test/resources
I want to load the following XML file using the Apache Commons Configuration library.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Here are some favorites</comment>
<entry key="favoriteSeason">summer</entry>
<entry key="favoriteFruit">pomegranate</entry>
<entry key="favoriteDay">today</entry>
</properties>
I have written the following code snippet for LoadXml.java
public static void configure() {
    try {
        XMLConfiguration config = new XMLConfiguration("config.xml");
        String node = config.getRootElementName();
    } catch (ConfigurationException e) {
        e.printStackTrace();
    }
}
I want to load the XML keys and values into a map, with hierarchy nodes separated by a "." (dot). It would be greatly helpful if someone could help me in this regard.
Load xml keys and values into a Map:
public static Map<String, String> parseConfig() throws ConfigurationException {
XMLConfiguration config = new XMLConfiguration("config.xml");
NodeList list = config.getDocument().getElementsByTagName("entry");
Map<String, String> map = new HashMap<String, String>();
for (int i = 0; i < list.getLength(); i++) {
Node node = list.item(i);
String key = node.getAttributes().getNamedItem("key").getTextContent();
String val = node.getTextContent();
map.put(key, val);
}
System.out.println(map);
return map;
}
OUTPUT:
{favoriteSeason=summer, favoriteFruit=pomegranate, favoriteDay=today}
Just use the config.getRootNode() and then node.getChildren("entry")
XMLConfiguration config = new XMLConfiguration("_config.xml");
Map<String, String> configMap = new HashMap<String, String>();
ConfigurationNode node = config.getRootNode();
for (ConfigurationNode c : node.getChildren("entry"))
{
String key = (String)c.getAttribute(0).getValue();
String value = (String)c.getValue();
configMap.put(key, value);
}
Then you can just do:
System.out.println(configMap.get("favoriteSeason")); // prints: summer
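Neither snippet produces the dot-separated hierarchical keys the question also asked about. As a small sketch against the same Commons Configuration 1.x API, the configuration's own key iterator already yields hierarchical keys; note that for this particular file the hierarchy lives in attributes, so the entries surface as entry[@key] rather than as nested parent.child tags (uses java.util.Iterator, Map, HashMap as usual):
// copy every hierarchical key into a flat Map; nested tags would appear as "parent.child"
XMLConfiguration config = new XMLConfiguration("config.xml");
Map<String, String> configMap = new HashMap<String, String>();
for (Iterator<String> keys = config.getKeys(); keys.hasNext();) {
    String key = keys.next();
    configMap.put(key, config.getString(key));
}
System.out.println(configMap);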
I was asked to include a math expression inside a string,
say: "price: ${price}, tax: ${price}*${tax}"
The string is given at run-time, along with a Map of values.
I used Velocity for this:
maven:
<properties>
<velocity.version>1.6.2</velocity.version>
<velocity.tools.version>2.0</velocity.tools.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.velocity</groupId>
<artifactId>velocity</artifactId>
<version>${velocity.version}</version>
</dependency>
<dependency>
<groupId>org.apache.velocity</groupId>
<artifactId>velocity-tools</artifactId>
<version>${velocity.tools.version}</version>
</dependency>
</dependencies>
java:
public class VelocityUtils {
public static String mergeTemplateIntoString(String template, Map<String,String> model)
{
try
{
final VelocityEngine ve = new VelocityEngine();
ve.init();
final VelocityContext context = new VelocityContext();
context.put("math", new MathTool());
context.put("number", new NumberTool());
for (final Map.Entry<String, String> entry : model.entrySet())
{
final String macroName = entry.getKey();
context.put(macroName, entry.getValue());
}
final StringWriter wr = new StringWriter();
final String logStr = "";
ve.evaluate(context, wr, logStr,template);
return wr.toString();
} catch(Exception e)
{
return "";
}
}
}
test class:
public class VelocityUtilsTest
{
@Test
public void testMergeTemplateIntoString() throws Exception
{
Map<String,String> model = new HashMap<>();
model.put("price","100");
model.put("tax","22");
String parsedString = VelocityUtils.mergeTemplateIntoString("price: ${price} tax: ${tax}",model);
assertEquals("price: 100 tax: 22",parsedString);
String parsedStringWithMath = VelocityUtils.mergeTemplateIntoString("price: $number.integer($math.div($price,2))",model);
assertEquals("price: 50",parsedStringWithMath);
}
}
Would it be better to use SpEL instead?
I agree that this is kind of off topic, but I think it merits an answer nonetheless.
The whole idea of using a templating engine is that you need to have access to the templates at runtime. If that is the case, then sure, Velocity is a good choice. Then you could provide new versions of the HTML, and assuming you didn't change the variables that were used, you would not have to provide a new version of the application itself (recompiled).
However, if you are just using Velocity to save yourself time, it's not saving much here: you could do this with a StringTokenizer in only a few more lines of code.
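For the plain-substitution case (no math), here is a minimal sketch of that hand-rolled approach, using a regex Matcher rather than StringTokenizer for brevity; the class and method names are just illustrative:
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleInterpolator {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{(\\w+)\\}");

    // replaces every ${name} with model.get("name"), leaving unknown names untouched
    public static String interpolate(String template, Map<String, String> model) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = model.get(m.group(1));
            m.appendReplacement(sb, Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> model = new HashMap<>();
        model.put("price", "100");
        model.put("tax", "22");
        System.out.println(interpolate("price: ${price} tax: ${tax}", model));
        // prints: price: 100 tax: 22
    }
}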