Abstracting duplicate lines of code in lambda functions in Java - java

I have two lambda functions that contain some duplicated lines of code, as shown below in the organizeBooksFunc and organizeToysFunc functions. The rest of the lines differ. In both cases the values path, value, and organizedValue are also used in the rest of the code. Is there a way to abstract the duplicated lines in both these functions?
public interface OrganizerFunc {
void organize(List<String> orgPath, Schema schema, Value value);
}
public static void organizeBooks(){
OrganizerFunc organizeBooksFunc = (orgPath, schema, value) -> {
if(schema.isOrganizable()) {
path = orgPath.get();
value = PREFIX + value.get();
organizedValue = callOrganizer(path, value);
// Rest of the code below
}
};
}
public static void organizeToys(){
OrganizerFunc organizeToysFunc = (orgPath, schema, value) -> {
if(schema.isOrganizable()) {
path = orgPath.get();
value = PREFIX + value.get();
organizedValue = callOrganizer(path, value);
// Rest of the code below
}
};
}

What about a class like the following:
@Data
public class SharedStuff {
private String path;
private String value;
private String organizedValue;
public SharedStuff(List<String> orgPath, Schema schema, Value value) {
this.path = orgPath.get();
this.value = PREFIX + value.get();
this.organizedValue = callOrganizer(path, value);
}
}
Then use it like this:
public static void organizeBooks(){
OrganizerFunc organizeBooksFunc = (orgPath, schema, value) -> {
if(schema.isOrganizable()) {
SharedStuff sharedStuff = new SharedStuff(orgPath, schema, value);
// Rest of the code below
sharedStuff.getPath();
}
};
}
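A further option, if a data-holder class feels heavy, is to extract the shared prelude into a helper that hands the computed values to the case-specific logic. The sketch below is self-contained and simplifies the question's types to Strings (callOrganizer and PREFIX here are hypothetical stand-ins), so it only illustrates the shape of the refactoring:

```java
import java.util.function.BiConsumer;

public class OrganizerDemo {
    static final String PREFIX = "org-";

    // Hypothetical stand-in for the question's callOrganizer
    static String callOrganizer(String path, String value) {
        return path + ":" + value;
    }

    // Shared prelude extracted once; the per-case "rest of the code" is
    // passed in as a BiConsumer receiving the prefixed value and the
    // organized value.
    static void organize(String orgPath, String rawValue,
                         BiConsumer<String, String> rest) {
        String value = PREFIX + rawValue;
        String organizedValue = callOrganizer(orgPath, value);
        rest.accept(value, organizedValue);
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        organize("books/", "hobbit", (v, ov) -> out.append("books: ").append(ov));
        organize("toys/", "lego", (v, ov) -> out.append("; toys: ").append(ov));
        System.out.println(out);
    }
}
```

In the real code the same pattern would apply with the isOrganizable() guard moved into the helper as well, so each lambda body shrinks to just its unique logic.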


How to use Java stream map with filters and optional?

Basically, what I want to do is the following code (I get a system param, check that it is not null, and then, if the current system in the code is not equal to it, set the dbName from the parameter):
if (Objects.nonNull(query.getSystem())) {
if (!query.getSystem().equals(dbContextHolder.getCurrentDb().toString())) {
dbContextHolder.setCurrentDb(Enum.valueOf(DbTypeEnum.class, query.getSystem()));
}
}
I also want to return null if the currentDb system is null. What I tried is something like:
var res = Optional.ofNullable(dbContextHolder.getCurrentDb().toString())
.map(String::toString)
.filter(s -> !s.equals(dbType))
.orElse(Optional.ofNullable(dbType).orElse(null));
But as you can see, it is wrong and does not work. How can I achieve this: if the parameter dbType is not equal to getCurrentDb(), call setDbType(paramDbType); if they are equal, return one of them; and if currentDb is null, return null?
By reducing your problem down, I realized that you always want the value of query.getSystem() to become the context. So I reduced your code to this:
MockDbTypeEnum newMethod(MockQuery query, MockDbContextHolder dbContextHolder) {
return Optional
.ofNullable(query.getSystem())
.map(MockDbTypeEnum::valueOf)
.orElse(null);
}
MockDbTypeEnum oldMethod(MockQuery query, MockDbContextHolder dbContextHolder) {
if (Objects.nonNull(query.getSystem())) {
if (!query.getSystem().equals(dbContextHolder.getCurrentDb().toString())) {
dbContextHolder.setCurrentDb(Enum.valueOf(MockDbTypeEnum.class, query.getSystem()));
}
return dbContextHolder.getCurrentDb();
}
return null;
}
Also, here are the mocks and tests I used to show that these methods are functionally equivalent for your purposes:
@ParameterizedTest
@CsvSource(value = {
"PSQL, PSQL, PSQL",
"PSQL, SQL, PSQL",
"SQL, SQL, SQL",
"SQL, PSQL, SQL",
"null, SQL, null",
"null, PSQL, null"
}, nullValues = {"null"})
void test(String system, MockDbTypeEnum currentDb, MockDbTypeEnum expectedResult) {
MockQuery query = new MockQuery(system);
MockDbContextHolder dbContextHolder = new MockDbContextHolder(currentDb);
MockDbTypeEnum result = oldMethod(query, dbContextHolder);
assertEquals(expectedResult, result);
MockDbTypeEnum newResult = newMethod(query, dbContextHolder);
assertEquals(expectedResult, newResult);
}
enum MockDbTypeEnum {
PSQL,
SQL
}
static class MockQuery {
private final String system;
public MockQuery(String system) {
this.system = system;
}
public String getSystem() {
return system;
}
}
static class MockDbContextHolder {
private MockDbTypeEnum currentDb;
public MockDbContextHolder(MockDbTypeEnum currentDb) {
this.currentDb = currentDb;
}
public MockDbTypeEnum getCurrentDb() {
return currentDb;
}
public void setCurrentDb(MockDbTypeEnum currentDb) {
this.currentDb = currentDb;
}
}

Return a map from Golang to Java using Native Library, by passing string as an input from Java

I'm trying to call a Golang function from a Java program (reference being used), and that works perfectly fine.
With that, I'm trying to return a map from Go to Java. With the return type in Java declared as Object or Map, nothing works; the exception thrown is:
Exception in thread "main" java.lang.IllegalArgumentException: Unsupported return type interface java.util.Map in function GetGeoLoc
at com.sun.jna.Function.invoke(Function.java:490)
at com.sun.jna.Function.invoke(Function.java:360)
at com.sun.jna.Library$Handler.invoke(Library.java:244)
at com.sun.proxy.$Proxy0.GetGeoLoc(Unknown Source)
at com.java.readers.GetGeoLoc2Tests.getGeoLoc2(GetGeoLoc2Tests.java:39)
at com.java.readers.GetGeoLoc2Tests.main(GetGeoLoc2Tests.java:52)
Could someone help me with a way out? I believe I'm missing something here.
Here's the code
Java:
public class GetGeoLoc2Tests {
static GoCaller GO_CQM_TRANSFORMER;
public interface GoCaller extends Library {
public Map<String, String> GetGeoLoc(GoString.ByValue ip);
public int Add(int i);
public String SString(GoString.ByValue s);
public class GoString extends Structure {
public static class ByValue extends GoString implements Structure.ByValue {
}
public String ip;
public long n;
protected List getFieldOrder() {
return Arrays.asList(new String[] { "ip", "n" });
}
}
}
public static void getGeoLoc2(String ip) {
if (ip == null) {
return;
}
GoCaller.GoString.ByValue str = new GoCaller.GoString.ByValue();
str.ip = ip;
str.n = str.ip.length();
System.out.println("JAVA----->" + ip);
Map<String, String> map = GO_CQM_TRANSFORMER.GetGeoLoc(str);
System.out.println(map);
}
public static void main(String[] args) {
if(args.length == 0) {
System.out.println("Expecting an input<IpAddress>");
System.exit(0);
}
String pwd = System.getProperty("user.home");
String lib = pwd + "/go/src/ip2loc/iploc.so";
GO_CQM_TRANSFORMER = (GoCaller) Native.loadLibrary(lib, GoCaller.class);
// System.out.println(GO_CQM_TRANSFORMER.Add(10));
getGeoLoc2(args[0]);
}
}
Golang:
package main
import "C"
import (
"fmt"
"github.com/ip2location/ip2location-go"
)
func main(){}
//export GetGeoLoc
func GetGeoLoc(ip string) map[string]string {
ip2location.Open("/data/md0/ip2loc/ip6/IPV6-COUNTRY-REGION-CITY-LATITUDE-LONGITUDE-ZIPCODE-TIMEZONE-ISP-DOMAIN-NETSPEED-AREACODE-WEATHER-MOBILE-ELEVATION-USAGETYPE.BIN")
//ip1 := "8.8.8.8"
fmt.Println("input:" , ip)
results := ip2location.Get_all(ip)
m := make(map[string]string)
m["country_short"] = results.Country_short
m["country_long"] = results.Country_long
m["region"] = results.Region
m["city"] = results.City
m["isp"] = results.Isp
m["latitude"] = fmt.Sprintf("%f", results.Latitude)
m["longitude"] = fmt.Sprintf("%f", results.Longitude)
m["domain"] = results.Domain
m["zipcode"] = results.Zipcode
m["timezone"] = results.Timezone
m["netspeed"] = results.Netspeed
m["iddcode"] = results.Iddcode
m["areacode"] = results.Areacode
m["weatherstationcode"] = results.Weatherstationcode
m["weatherstationaname"] = results.Weatherstationname
m["mcc"] = results.Mcc
m["mnc"] = results.Mnc
m["mobilebrand"] = results.Mobilebrand
m["elevation"] = fmt.Sprintf("%f", results.Elevation)
m["usagetype"] = results.Usagetype
fmt.Println("map:", m)
ip2location.Close()
return m
}
On execution, it prints the map m. The question is: how do I return this map m to a Java method as a HashMap? Thanks

How to read AvroFile into Tuple Class with Java in Flink

I'm trying to read an Avro file and perform some operations on it. Everything works fine except the aggregation functions; when I use them I get the below exception:
aggregating on field positions is only possible on tuple data types
So I changed my class to extend Tuple4 (as I have 4 fields), but then, when I want to collect the results, I get an AvroTypeException: Unknown Type : T0
Here are my data and job classes :
public class Nation{
public Integer N_NATIONKEY;
public String N_NAME;
public Integer N_REGIONKEY;
public String N_COMMENT;
public Integer getN_NATIONKEY() {
return N_NATIONKEY;
}
public void setN_NATIONKEY(Integer n_NATIONKEY) {
N_NATIONKEY = n_NATIONKEY;
}
public String getN_NAME() {
return N_NAME;
}
public void setN_NAME(String n_NAME) {
N_NAME = n_NAME;
}
public Integer getN_REGIONKEY() {
return N_REGIONKEY;
}
public void setN_REGIONKEY(Integer n_REGIONKEY) {
N_REGIONKEY = n_REGIONKEY;
}
public String getN_COMMENT() {
return N_COMMENT;
}
public void setN_COMMENT(String n_COMMENT) {
N_COMMENT = n_COMMENT;
}
public Nation() {
}
public static void main(String[] args) throws Exception {
Configuration parameters = new Configuration();
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
Path path2 = new Path("/Users/violet/Desktop/nation.avro");
AvroInputFormat<Nation> format = new AvroInputFormat<Nation>(path2,Nation.class);
format.configure(parameters);
DataSet<Nation> nation = env.createInput(format);
nation.aggregate(Aggregations.SUM,0);
JobExecutionResult res = env.execute();
}
and here's the tuple class and the same code for the job as above:
public class NationTuple extends Tuple4<Integer,String,Integer,String> {
Integer N_NATIONKEY(){ return this.f0;}
String N_NAME(){return this.f1;}
Integer N_REGIONKEY(){ return this.f2;}
String N_COMMENT(){ return this.f3;}
}
I tried with this class and got the TypeException (I used NationTuple everywhere instead of Nation).
I don't think having your class extend Tuple4 is the right way to go. Instead, you should add to your topology a MapFunction that converts your Nation to a Tuple4.
static Tuple4<Integer, String, Integer, String> toTuple(Nation nation) {
return Tuple4.of(nation.N_NATIONKEY, ...);
}
And then in your topology call:
inputData.map(p -> toTuple(p)).returns(new TypeHint<Tuple4<Integer, String, Integer, String>>(){});
The only subtle part is that you need to provide a type hint so Flink can figure out what kind of tuple your function returns.
Another solution is to use field names instead of tuple field indices when doing your aggregation. For example:
groupBy("N_NATIONKEY", "N_REGIONKEY")
This is all explained here: https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#specifying-keys
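To see the POJO-to-tuple conversion in isolation, here is a self-contained sketch that stubs Flink's Tuple4 locally (a real job would use org.apache.flink.api.java.tuple.Tuple4 and the DataSet API as shown above); it maps each Nation to a positional tuple and then sums field 0, which is what aggregate(Aggregations.SUM, 0) does on the tuple data set:

```java
import java.util.Arrays;
import java.util.List;

public class TupleDemo {
    // Minimal local stand-in for Flink's Tuple4 (positional fields f0..f3)
    static class Tuple4<A, B, C, D> {
        final A f0; final B f1; final C f2; final D f3;
        Tuple4(A f0, B f1, C f2, D f3) { this.f0 = f0; this.f1 = f1; this.f2 = f2; this.f3 = f3; }
    }

    static class Nation {
        Integer N_NATIONKEY; String N_NAME; Integer N_REGIONKEY; String N_COMMENT;
        Nation(int key, String name, int region, String comment) {
            N_NATIONKEY = key; N_NAME = name; N_REGIONKEY = region; N_COMMENT = comment;
        }
    }

    // The conversion the answer recommends: POJO -> positional tuple
    static Tuple4<Integer, String, Integer, String> toTuple(Nation n) {
        return new Tuple4<>(n.N_NATIONKEY, n.N_NAME, n.N_REGIONKEY, n.N_COMMENT);
    }

    public static void main(String[] args) {
        List<Nation> nations = Arrays.asList(
                new Nation(1, "FRANCE", 3, ""),
                new Nation(2, "GERMANY", 3, ""));
        // Plain-Java equivalent of aggregate(Aggregations.SUM, 0)
        int sum = nations.stream().map(TupleDemo::toTuple).mapToInt(t -> t.f0).sum();
        System.out.println(sum);
    }
}
```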

Generate XML only to certain depth and limit lists

For a project that generates XML files based on an XSD file, I want to automatically generate the documentation. *
In this documentation I list the different elements defined in the XSD.
And for each element I want to show an example of that element.
The problem is that the XML example might be quite long and contain a lot of children.
Therefore I want to shorten the example by:
limiting the shown depth
limiting the amount of elements in a list
For the root-element that example might look like the following:
<root>
<elements>
<element>...</element>
<element>...</element>
<element>...</element>
...
</elements>
</root>
My approach:
To generate classes from the XSD and to generate and validate the XML files I use JAXB.
But I could not figure out how to marshal a non-root element.
Therefore I am generating my examples with XStream.
To limit the XML-example I am trying to decorate the PrettyPrintWriter, but that seems to be quite cumbersome.
The two decorators can be seen in my answer.
I just did not expect to have to care about the internals of a library for such a (common?) feature.
Is there an easier way to do this? (I can also use another library than XStream, or none at all.)
*
My approach is influenced by Spring Auto Rest Docs
To limit the shown depth I created the following XStream WriterWrapper. The class can wrap for example a PrettyPrintWriter and ensures that the wrapped writer only receives the nodes above a given depth threshold.
public class RestrictedPrettyPrintWriter extends WriterWrapper {
private final ConverterLookup converterLookup;
private final int maximalDepth;
private int depth;
public RestrictedPrettyPrintWriter(HierarchicalStreamWriter sw, ConverterLookup converterLookup, int maximalDepth) {
super(sw);
this.converterLookup = converterLookup;
this.maximalDepth = maximalDepth;
}
@Override public void startNode(String name, Class clazz) {
Converter converter = this.converterLookup.lookupConverterForType(clazz);
boolean isSimpleType = converter instanceof SingleValueConverter;
_startNode(name, !isSimpleType);
}
@Override public void startNode(String name) {
_startNode(name, false);
}
@Override public void endNode() {
if (_isLessDeepThanMaximalDepth() || _isMaximalDepthReached()) {
super.endNode();
}
depth--;
}
@Override public void addAttribute(String key, String value) {
if (_isLessDeepThanMaximalDepth() || _isMaximalDepthReached()) {
super.addAttribute(key, value);
}
}
@Override public void setValue(String text) {
if (_isLessDeepThanMaximalDepth() || _isMaximalDepthReached()) {
super.setValue(text);
}
}
/**
* @param name name of the new node
* @param isComplexType indicates if the element is complex or contains a single value
*/
private void _startNode(String name, boolean isComplexType) {
depth++;
if (_isLessDeepThanMaximalDepth()) {
super.startNode(name);
} else if (_isMaximalDepthReached()) {
super.startNode(name);
/*
* set the placeholder value now
* setValue() will never be called for complex types
*/
if (isComplexType) {
super.setValue("...");
}
}
}
private boolean _isMaximalDepthReached() {
return depth == maximalDepth;
}
private boolean _isLessDeepThanMaximalDepth() {
return depth < maximalDepth;
}
}
To limit the lists, I tried, in a first attempt, to modify the XStream CollectionConverter. But this approach was not general enough because implicit lists do not use this converter.
Therefore I created another WriterWrapper which counts the consecutive occurrences of elements with the same name.
public class RestrictedCollectionWriter extends WriterWrapper {
private final int maxConsecutiveOccurences;
private int depth;
/** Contains one element per depth.
* More precisely: the current element and its parents.
*/
private Map<Integer, Elements> elements = new HashMap<>();
public RestrictedCollectionWriter(HierarchicalStreamWriter sw, int maxConsecutiveOccurences) {
super(sw);
this.maxConsecutiveOccurences = maxConsecutiveOccurences;
}
@Override public void startNode(String name, Class clazz) {
_startNode(name);
}
@Override public void startNode(String name) {
_startNode(name);
}
@Override public void endNode() {
if (_isCurrentElementPrintable()) {
super.endNode();
}
depth--;
}
@Override public void addAttribute(String key, String value) {
if (_isCurrentElementPrintable()) {
super.addAttribute(key, value);
}
}
@Override public void setValue(String text) {
if (_isCurrentElementPrintable()) {
super.setValue(text);
}
}
/**
* @param name name of the new node
*/
private void _startNode(String name) {
depth++;
Elements currentElement = this.elements.getOrDefault(depth, new Elements());
this.elements.put(depth, currentElement);
Elements parent = this.elements.get(depth - 1);
boolean parentPrintable = parent == null ? true : parent.isPrintable();
currentElement.setName(name, parentPrintable);
if (currentElement.isPrintable()) {
super.startNode(name);
}
}
private boolean _isCurrentElementPrintable() {
Elements currentElement = this.elements.get(depth);
return currentElement.isPrintable();
}
/**
* Evaluates if an element is printable or not.
* This is based on the consecutive occurrences of the element's name
* and if the parent element is printable or not.
*/
private class Elements {
private String name = "";
private int concurrentOccurences = 0;
private boolean parentPrintable;
public void setName(String name, boolean parentPrintable) {
if (this.name.equals(name)) {
concurrentOccurences++;
} else {
concurrentOccurences = 1;
}
this.name = name;
this.parentPrintable = parentPrintable;
}
public boolean isPrintable() {
return parentPrintable && concurrentOccurences <= maxConsecutiveOccurences;
}
}
}
The following listing shows how the two classes can be used.
XStream xstream = new XStream(new StaxDriver());
StringWriter sw = new StringWriter();
PrettyPrintWriter pw = new PrettyPrintWriter(sw);
RestrictedCollectionWriter cw = new RestrictedCollectionWriter(pw, 3);
xstream.marshal(objectToMarshal, new RestrictedPrettyPrintWriter(cw, xstream.getConverterLookup(), 3));
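For comparison, the depth-threshold idea itself does not require XStream's writer machinery at all. Below is a rough standalone sketch (the names and structure are mine, not from any library) that renders nested maps as pseudo-XML and substitutes the "..." placeholder once the depth limit is reached, mirroring the logic of the wrapper above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DepthLimitDemo {
    // Renders nested maps as pseudo-XML; any complex node at or below
    // maxDepth is replaced by the "..." placeholder.
    @SuppressWarnings("unchecked")
    static String render(String name, Object node, int depth, int maxDepth) {
        StringBuilder sb = new StringBuilder("<" + name + ">");
        if (node instanceof Map) {
            if (depth >= maxDepth) {
                sb.append("...");
            } else {
                for (Map.Entry<String, Object> e : ((Map<String, Object>) node).entrySet()) {
                    sb.append(render(e.getKey(), e.getValue(), depth + 1, maxDepth));
                }
            }
        } else {
            sb.append(node);
        }
        return sb.append("</").append(name).append(">").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> child = new LinkedHashMap<>();
        child.put("grandchild", "text");
        Map<String, Object> root = new LinkedHashMap<>();
        root.put("child", child);
        // With maxDepth 2, the child's content is elided
        System.out.println(render("root", root, 1, 2));
    }
}
```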

Best way to iterate two lists and extract few things?

I have two classes, shown below, that I need to use to extract a few things.
public final class ProcessMetadata {
private final String clientId;
private final String deviceId;
// .. lot of other fields here
// getters here
}
public final class ProcMetadata {
private final String deviceId;
private final Schema schema;
// .. lot of other fields here
}
Now I have the below code, where I iterate over the two lists and extract the schema for a given clientId.
public Optional<Schema> getSchema(final String clientId) {
for (ProcessMetadata metadata1 : processMetadataList) {
if (metadata1.getClientId().equalsIgnoreCase(clientId)) {
String deviceId = metadata1.getDeviceId();
for (ProcMetadata metadata2 : procMetadataList) {
if (metadata2.getDeviceId().equalsIgnoreCase(deviceId)) {
return Optional.of(metadata2.getSchema());
}
}
}
}
return Optional.absent();
}
Is there a better way of getting what I need by iterating over those two lists, in a couple of lines instead of what I have? I am using Java 7.
You're doing a quadratic* search operation, which is inefficient. You can do this lookup in constant time by first creating (in linear time) a mapping from id to object for each list. That would look something like this:
// do this once, in the constructor or wherever you create these lists
// even better discard the lists and use the mappings everywhere
Map<String, ProcessMetadata> processMetadataByClientId = new HashMap<>();
for (ProcessMetadata process : processMetadataList) {
processMetadataByClientId.put(process.getClientId(), process);
}
Map<String, ProcMetadata> procMetadataByDeviceId = new HashMap<>();
for (ProcMetadata proc : procMetadataList) {
procMetadataByDeviceId.put(proc.getDeviceId(), proc);
}
Then your lookup simply becomes:
public Optional<Schema> getSchema(String clientId) {
ProcessMetadata process = processMetadataByClientId.get(clientId);
if (process != null) {
ProcMetadata proc = procMetadataByDeviceId.get(process.getDeviceId());
if (proc != null) {
return Optional.of(proc.getSchema());
}
}
return Optional.absent();
}
In Java 8 you could write it like this:
public Optional<Schema> getSchema(String clientId) {
return Optional.ofNullable(processMetadataByClientId.get(clientId))
.map(p -> procMetadataByDeviceId.get(p.getDeviceId()))
.map(p -> p.getSchema());
}
* In practice your algorithm is linear assuming client IDs are unique, but it's still technically O(n^2) because you potentially touch every element of the proc list for every element of the process list. A slight tweak to your algorithm can guarantee linear time (again assuming unique IDs):
public Optional<Schema> getSchema(final String clientId) {
for (ProcessMetadata metadata1 : processMetadataList) {
if (metadata1.getClientId().equalsIgnoreCase(clientId)) {
String deviceId = metadata1.getDeviceId();
for (ProcMetadata metadata2 : procMetadataList) {
if (metadata2.getDeviceId().equalsIgnoreCase(deviceId)) {
return Optional.of(metadata2.getSchema());
}
}
// adding a break here ensures the search doesn't become quadratic
break;
}
}
return Optional.absent();
}
Though of course using maps ensures constant-time, which is far better.
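Putting the map-based lookup together as a runnable sketch: the two metadata classes are stubbed down to just the fields used here, Schema is stubbed as a plain String, and java.util.Optional stands in for Guava's Optional (the Guava version is analogous, with fromNullable/transform/or):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class SchemaLookupDemo {
    // Stubs of the question's classes, reduced to the fields used here
    static class ProcessMetadata {
        final String clientId, deviceId;
        ProcessMetadata(String clientId, String deviceId) {
            this.clientId = clientId;
            this.deviceId = deviceId;
        }
    }
    static class ProcMetadata {
        final String deviceId, schema; // schema stubbed as a String
        ProcMetadata(String deviceId, String schema) {
            this.deviceId = deviceId;
            this.schema = schema;
        }
    }

    // Built once, up front, in linear time
    final Map<String, ProcessMetadata> processByClientId = new HashMap<>();
    final Map<String, ProcMetadata> procByDeviceId = new HashMap<>();

    // Two constant-time hash lookups chained through Optional
    Optional<String> getSchema(String clientId) {
        return Optional.ofNullable(processByClientId.get(clientId))
                .map(p -> procByDeviceId.get(p.deviceId))
                .map(p -> p.schema);
    }

    public static void main(String[] args) {
        SchemaLookupDemo demo = new SchemaLookupDemo();
        demo.processByClientId.put("c1", new ProcessMetadata("c1", "d1"));
        demo.procByDeviceId.put("d1", new ProcMetadata("d1", "avro-schema"));
        System.out.println(demo.getSchema("c1").orElse("none"));
        System.out.println(demo.getSchema("c2").orElse("none"));
    }
}
```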
I wondered what could be done with Guava, and accidentally wrote this hot mess.
import static com.google.common.collect.Iterables.tryFind;
public Optional<Schema> getSchema(final String clientId) {
Optional<String> deviceId = findDeviceIdByClientId(clientId);
return deviceId.isPresent() ? findSchemaByDeviceId(deviceId.get()) : Optional.absent();
}
public Optional<String> findDeviceIdByClientId(String clientId) {
return tryFind(processMetadataList, new ClientIdPredicate(clientId))
.transform(new Function<ProcessMetadata, String>() {
public String apply(ProcessMetadata processMetadata) {
return processMetadata.getDeviceId();
}
});
}
public Optional<Schema> findSchemaByDeviceId(String deviceId) {
return tryFind(procMetadataList, new DeviceIdPredicate(deviceId))
.transform(new Function<ProcMetadata, Schema>() {
public Schema apply(ProcMetadata procMetadata) {
return procMetadata.getSchema();
}
});
}
class DeviceIdPredicate implements Predicate<ProcMetadata> {
private String deviceId;
public DeviceIdPredicate(String deviceId) {
this.deviceId = deviceId;
}
@Override
public boolean apply(ProcMetadata metadata2) {
return metadata2.getDeviceId().equalsIgnoreCase(deviceId);
}
}
class ClientIdPredicate implements Predicate<ProcessMetadata> {
private String clientId;
public ClientIdPredicate(String clientId) {
this.clientId = clientId;
}
@Override
public boolean apply(ProcessMetadata metadata1) {
return metadata1.getClientId().equalsIgnoreCase(clientId);
}
}
Sorry.
