I'm new to Vaadin and trying to understand how to make a view receive several parameters from the URL.
For example:
http://www.some.com/book/18/page/41
The numbers 18 and 41 are parameters.
I've found that I can implement HasUrlParameter<T> and then use the setParameter method, but that only handles a single parameter.
Are you using @WildcardParameter in your setParameter method? See Wildcard URL parameters in the docs.
Assuming that greet (book in your case) is the route, the code below receives 18/page/41 as the parameter. Since it's a single string you need to parse it and extract the values you need, but the value is there.
#Route("greet")
public class WildcardGreeting extends Div
implements HasUrlParameter<String> {
#Override
public void setParameter(BeforeEvent event,
#WildcardParameter String parameter) {
if (parameter.isEmpty()) {
setText("Welcome anonymous.");
} else {
setText(String.format(
"Handling parameter %s.",
parameter));
}
}
}
P.S. Not directly related to the question, but looking at your URL, could it be that query parameters suit you better? See Query parameters in the docs.
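For reference, reading query parameters in a view is straightforward; here is a minimal sketch, assuming a Vaadin 14 Flow view (the route and parameter names are just examples):

import java.util.List;
import java.util.Map;

import com.vaadin.flow.component.html.Div;
import com.vaadin.flow.router.BeforeEvent;
import com.vaadin.flow.router.HasUrlParameter;
import com.vaadin.flow.router.OptionalParameter;
import com.vaadin.flow.router.QueryParameters;
import com.vaadin.flow.router.Route;

@Route("book")
public class BookQueryView extends Div implements HasUrlParameter<String> {

    @Override
    public void setParameter(BeforeEvent event,
            @OptionalParameter String parameter) {
        // e.g. /book?id=18&page=41
        QueryParameters queryParameters = event.getLocation().getQueryParameters();
        Map<String, List<String>> params = queryParameters.getParameters();

        List<String> ids = params.get("id");     // null if the parameter is absent
        List<String> pages = params.get("page"); // null if the parameter is absent

        String id = (ids == null || ids.isEmpty()) ? "none" : ids.get(0);
        String page = (pages == null || pages.isEmpty()) ? "none" : pages.get(0);

        setText(String.format("id=%s, page=%s", id, page));
    }
}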
There is no built-in support for having multiple parameters for Java views in Vaadin. What you can do is annotate the parameter with @WildcardParameter so that multiple path segments are captured into one parameter. You then have to manually manage the contents of that value: concatenating strings when generating URLs and parsing strings in setParameter.
Support for multiple parameters is being worked on right now, but the work is not yet completed. It is not yet clear which future version of Vaadin will get this feature, but my guess right now is that it would be either version 14.3 or 14.4.
It seems Vaadin 14 has since been updated and now supports multiple path parameters via route templates.
Example:
#Route("user/:userID/:messageID/edit")
public class UserProfileEdit extends Div implements BeforeEnterObserver {
private String userID;
private String messageID;
#Override
public void beforeEnter(BeforeEnterEvent event) {
userID = event.getRouteParameters().get("userID").get();
messageID = event.getRouteParameters().get("messageID").get();
}
}
Source: https://vaadin.com/docs/v14/flow/routing/tutorial-router-templates
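As a usage note, with route templates you can also navigate to such a view programmatically by supplying the parameter values. A minimal sketch, assuming the RouteParam/RouteParameters API that ships alongside route-template support:

// Navigates to /user/18/41/edit for the UserProfileEdit view above
UI.getCurrent().navigate(UserProfileEdit.class,
        new RouteParameters(
                new RouteParam("userID", "18"),
                new RouteParam("messageID", "41")));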
A simple example of the wildcard-based solution:
#Route("book")
public class BookView extends Div implements HasUrlParameter<String> {
#Override
public void setParameter(BeforeEvent event, #WildcardParameter String parameter) {
if (!parameter.isEmpty()) {
String params[] = parameter.split("/");
if (params.length == 1) {
// Do something ..
} else if (params.length == 2) {
// Do another thing ..
} else {
// Do something else
}
}
}
}
The link can be created like this:
new RouterLink("No params", BookView.class);
new RouterLink("One param", BookView.class, "18");
new RouterLink("Two param", BookView.class, "18/edit");
Related
I would like to create an array_agg UDF for Apache Drill to be able to aggregate all values of a group into a list of values.
This should work with any major types (required, optional) and minor types (varchar, dict, map, int, etc.).
However, I get the impression that Apache Drill's UDF API does not really make use of inheritance and generics. Each type has its own writer and handler, and they cannot be abstracted to handle any type. E.g., the ValueHolder interface seems to be purely cosmetic and cannot be used to hook UDFs to any type in a type-agnostic way.
My current implementation
I tried to solve this by using Java reflection so I could use the ListHolder's write function independently of the holder of the original value.
However, I then ran into the limitations of the @FunctionTemplate annotation.
I cannot create a general UDF for any value (I tried it with the interface ValueHolder: @Param ValueHolder input).
So it seems to me that the only way to support different types is to have separate classes for each type. But I can't even abstract much and work on any @Param input, because input is only visible in the class where it's defined (i.e. it is type specific).
I based my implementation on https://issues.apache.org/jira/browse/DRILL-6963
and created the following two classes for required and optional varchars (how can this be unified in the first place?)
@FunctionTemplate(
        name = "array_agg",
        scope = FunctionScope.POINT_AGGREGATE,
        nulls = NullHandling.INTERNAL
)
public static class VarChar_Agg implements DrillAggFunc {

    @Param org.apache.drill.exec.expr.holders.VarCharHolder input;
    @Workspace ObjectHolder agg;
    @Output org.apache.drill.exec.vector.complex.writer.BaseWriter.ComplexWriter out;

    @Override
    public void setup() {
        agg = new ObjectHolder();
    }

    @Override
    public void reset() {
        agg = new ObjectHolder();
    }

    @Override
    public void add() {
        if (agg.obj == null) {
            // Initialise list object for output
            agg.obj = out.rootAsList();
        }
        org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter listWriter =
                (org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj;
        listWriter.varChar().write(input);
    }

    @Override
    public void output() {
        ((org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj).endList();
    }
}
@FunctionTemplate(
        name = "array_agg",
        scope = FunctionScope.POINT_AGGREGATE,
        nulls = NullHandling.INTERNAL
)
public static class NullableVarChar_Agg implements DrillAggFunc {

    @Param NullableVarCharHolder input;
    @Workspace ObjectHolder agg;
    @Output org.apache.drill.exec.vector.complex.writer.BaseWriter.ComplexWriter out;

    @Override
    public void setup() {
        agg = new ObjectHolder();
    }

    @Override
    public void reset() {
        agg = new ObjectHolder();
    }

    @Override
    public void add() {
        if (agg.obj == null) {
            // Initialise list object for output
            agg.obj = out.rootAsList();
        }
        if (input.isSet != 1) {
            return;
        }
        org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter listWriter =
                (org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj;
        org.apache.drill.exec.expr.holders.VarCharHolder outHolder =
                new org.apache.drill.exec.expr.holders.VarCharHolder();
        outHolder.start = input.start;
        outHolder.end = input.end;
        outHolder.buffer = input.buffer;
        listWriter.varChar().write(outHolder);
    }

    @Override
    public void output() {
        ((org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj).endList();
    }
}
Interestingly, I can't import org.apache.drill.exec.vector.complex.writer.BaseWriter to make the whole thing easier, because then Apache Drill would not find it.
So I have to spell out the entire package path for everything in org.apache.drill.exec.vector.complex.writer in the code.
Furthermore, I'm using the deprecated ObjectHolder. Is there a better solution?
Anyway, these work so far, e.g. with this query:
SELECT
MIN(tbl.`timestamp`) AS start_view,
MAX(tbl.`timestamp`) AS end_view,
array_agg(tbl.eventLabel) AS label_agg
FROM `dfs.root`.`/path/to/avro/folder` AS tbl
WHERE tbl.data.slug IS NOT NULL
GROUP BY tbl.data.slug
however, when I use ORDER BY, I get this:
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: UnsupportedOperationException: NULL
Fragment 0:0
Additionally, I tried more complex types, namely maps/dicts.
Interestingly, when I call SELECT sqlTypeOf(tbl.data) FROM tbl, I get MAP.
But when I write UDFs, the query planner complains about having no UDF array_agg for type dict.
Anyway, I wrote a version for dicts:
@FunctionTemplate(
        name = "array_agg",
        scope = FunctionScope.POINT_AGGREGATE,
        nulls = NullHandling.INTERNAL
)
public static class Map_Agg implements DrillAggFunc {

    @Param MapHolder input;
    @Workspace ObjectHolder agg;
    @Output org.apache.drill.exec.vector.complex.writer.BaseWriter.ComplexWriter out;

    @Override
    public void setup() {
        agg = new ObjectHolder();
    }

    @Override
    public void reset() {
        agg = new ObjectHolder();
    }

    @Override
    public void add() {
        if (agg.obj == null) {
            // Initialise list object for output
            agg.obj = out.rootAsList();
        }
        org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter listWriter =
                (org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj;
        //listWriter.copyReader(input.reader);
        input.reader.copyAsValue(listWriter);
    }

    @Override
    public void output() {
        ((org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj).endList();
    }
}
@FunctionTemplate(
        name = "array_agg",
        scope = FunctionScope.POINT_AGGREGATE,
        nulls = NullHandling.INTERNAL
)
public static class Dict_agg implements DrillAggFunc {

    @Param DictHolder input;
    @Workspace ObjectHolder agg;
    @Output org.apache.drill.exec.vector.complex.writer.BaseWriter.ComplexWriter out;

    @Override
    public void setup() {
        agg = new ObjectHolder();
    }

    @Override
    public void reset() {
        agg = new ObjectHolder();
    }

    @Override
    public void add() {
        if (agg.obj == null) {
            // Initialise list object for output
            agg.obj = out.rootAsList();
        }
        org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter listWriter =
                (org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj;
        //listWriter.copyReader(input.reader);
        input.reader.copyAsValue(listWriter);
    }

    @Override
    public void output() {
        ((org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter) agg.obj).endList();
    }
}
But here, I get an empty list in the field data_agg for my query:
SELECT
MIN(tbl.`timestamp`) AS start_view,
MAX(tbl.`timestamp`) AS end_view,
array_agg(tbl.data) AS data_agg
FROM `dfs.root`.`/path/to/avro/folder` AS tbl
GROUP BY tbl.data.viewSlag
Summary of questions
Most importantly: How do I create an array_agg UDF for Apache Drill?
How do I make UDFs type-agnostic/general purpose? Do I really have to implement an entire class for each Nullable, Required and Repeated version of every type? That's a lot of work and quite tedious. Isn't there a way to handle values in a UDF agnostic of the underlying types?
I wish Apache Drill would just use what Java offers here: generic methods, specialised overloading and inheritance within its own type system. Am I missing something on how to do that?
How can I fix the NULL problem when I use ORDER BY on my varchar version of the aggregate?
How can I fix the problem where my aggregate of maps/dicts is an empty list?
Is there an alternative to using the deprecated ObjectHolder?
To answer your question: unfortunately you've run into one of the limits of the Drill aggregate UDF API, which is that it can only return simple data types. It would be a great improvement to Drill to fix this, but that is the current status. If you're interested in discussing it further, please start a thread on the Drill user group and/or Slack channel. I don't think it is impossible, but it would require some modification to the Drill internals. IMHO it would be well worth it, because there are a few other UDFs I'd like to implement that need this feature.
The second part of your question is how to make UDFs type-agnostic, and once again... you've found yet another bit of ugliness in the UDF API. :-) If you do some digging in the codebase, you'll see that most of the math functions have versions that accept FLOAT, INT, etc.
Regarding the aggregate of null or empty lists, I actually have some good news here... The current way of doing that is to provide two versions of the function: one which accepts regular holders and a second which accepts nullable holders and returns an empty list or map if the inputs are null. Yes, this sucks, but the additional good news is that I'm working on cleaning this up and hopefully will have a PR submitted soon that eliminates the need to do this.
Regarding the ObjectHolder, I wrote a median function that uses a few Stacks to compute a streaming median and I used the ObjectHolder for that. I think it will be with us for some time as there is no alternative at the moment.
I hope this answers your questions.
I've seen a video where it's possible to set named locators for an Allure report,
so that the report shows a step like $(locatorName).click - passed:
Here is the code:
public class Named extends NamedBy {

    private final By origin;
    private String name;

    public Named(By origin) {
        this.origin = origin;
    }

    public Named as(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return Objects.nonNull(name) ? name : this.origin.toString();
    }

    @Override
    public List<WebElement> findElements(SearchContext context) {
        return new Named(By.id(id));
    }
}
And the code for the element:
SelenideElement button = $(id("someid").as("locatorName"));
and then it should be possible to work with this element.
But I can't.
There is no as method available when I try to create the SelenideElement.
Please help; such a report is much more readable.
video URL: https://youtu.be/d5gjK6hZHE4?t=1300
Your example doesn't seem to be valid. At the very least, the as method must return this. Moreover, id in the overridden findElements is undefined. Plus, it's not really clear why you extend NamedBy instead of By.
Anyway, that's just a wrapper around By. To see those locators' names in the report, you first have to follow the earlier example in the video (the event listener) before completing the NamedBy implementation.
P.S. To make it work the same way as shown in the code snippet, you have to add some additional creational logic, e.g.:
public static NamedBy id(String locator) {
return new NamedBy(By.id(locator));
}
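For completeness, here is one possible shape of the whole wrapper as a sketch only, addressing the points above (as returning this, findElements delegating to the wrapped locator); apart from mirroring the snippets above, the names are purely illustrative:

import java.util.List;
import java.util.Objects;

import org.openqa.selenium.By;
import org.openqa.selenium.SearchContext;
import org.openqa.selenium.WebElement;

public class NamedBy extends By {

    private final By origin;
    private String name;

    private NamedBy(By origin) {
        this.origin = origin;
    }

    // Factory so call sites read as NamedBy.id("someid").as("locatorName")
    public static NamedBy id(String locator) {
        return new NamedBy(By.id(locator));
    }

    // Must return this so the call can be chained fluently
    public NamedBy as(String name) {
        this.name = name;
        return this;
    }

    @Override
    public List<WebElement> findElements(SearchContext context) {
        // Delegate the actual lookup to the wrapped locator
        return origin.findElements(context);
    }

    @Override
    public String toString() {
        // The report/listener picks this up as the locator's display name
        return Objects.nonNull(name) ? name : origin.toString();
    }
}

It could then be used as SelenideElement button = $(NamedBy.id("someid").as("locatorName"));, with the display name in the report coming from toString() once the event listener from the video is in place.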
Prerequisites:
String command = "x";
Data data = request.get();

interface Action {
    Response process(Data data);
}

class ActionX implements Action {
    public Response process(Data data) { /* ... */ }
}

class Service {
    public void execute(Action action) {
        action.process(data);
    }
}
I don't understand how to register my actions. The following options are unacceptable:
// bad because too verbose
case "x":
action = new ActionX();
and
// bad because package name is a constant string
action = Class.forName("some.package.name.Action" + command.toUpperCase());
I guess I might try to use Java annotations to solve my problem. Something like this:
@Action(command = "x")
class ActionX implements Action {}
// scan whole classpath etc ...
But maybe I just need to use another pattern....
In my opinion, the Factory design pattern (or the Abstract Factory pattern) is suitable in this case. I might be wrong.
An annotation-based approach is overkill for this situation.
A switch case is not verbose at all.
You can try the following approach with the factory pattern:
enum Command {
    X("x");

    private String commandString;

    Command(String commandString) {
        this.commandString = commandString;
    }

    public String getCommandString() {
        return commandString;
    }
}

interface Action {
    void process();
}

class ActionX implements Action {
    @Override
    public void process() {
        System.out.println("Processing..");
    }
}

class ActionFactory {
    public Action getAction(Command command) {
        // Check the command and return the matching action;
        // a switch is the best fit here
        switch (command) {
            case X:
                return new ActionX();
            default:
                throw new IllegalArgumentException("Unknown command: " + command);
        }
    }
}
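A short usage sketch of the factory (the string-to-enum resolution is only hinted at here):

// Resolve the enum for the incoming command string, then dispatch
Command command = Command.X; // e.g. resolved from the incoming "x"
Action action = new ActionFactory().getAction(command);
action.process();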
Think about the situation where you have lots of commands and many actions. In that case, you should look into your design first and do some more analysis instead of deciding on a design pattern up front.
Do not use Class.forName(...). I would go for the switch case, but maybe change the command representation to an enum instead of a String.
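A minimal sketch of that enum-based representation, with a lookup from the raw command string (the constants are illustrative):

enum Command {
    X, Y;

    // Maps the incoming raw string (e.g. "x") onto the enum constant
    static Command fromString(String raw) {
        return Command.valueOf(raw.toUpperCase());
    }
}

The switch can then operate on Command values instead of raw strings.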
You can use a map:
Map<String, Action> map = ...;
map.put("x", new ActionX());
Response response = map.get("x").process(data);
It looks like the command name is an inherent part of the command (as opposed to being configured externally). If that's the case, model it that way:
interface Action {
    String name();
    Response process(Data data);
}
Then you can simply create a Map<String,Action> that uses each action's name as the key.
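A minimal sketch of that registry, reusing the Action interface above (the list of available actions and the ActionX/ActionY classes are placeholders):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ActionRegistry {

    private final Map<String, Action> actions = new HashMap<>();

    ActionRegistry(List<Action> available) {
        // Key each action by the name it declares for itself
        for (Action action : available) {
            actions.put(action.name(), action);
        }
    }

    Response dispatch(String command, Data data) {
        Action action = actions.get(command);
        if (action == null) {
            throw new IllegalArgumentException("Unknown command: " + command);
        }
        return action.process(data);
    }
}

Usage would then look something like new ActionRegistry(List.of(new ActionX(), new ActionY())).dispatch(command, data).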
If you want to make command classes discoverable instead of hard-coding them and you're not using an existing scanner like Spring, you should use the Service Provider Interface.
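A sketch of SPI-based discovery with java.util.ServiceLoader, feeding the same name()-keyed map (the provider-configuration file path depends on your actual package):

import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

static Map<String, Action> discoverActions() {
    Map<String, Action> actions = new HashMap<>();
    // Each Action implementation must be listed in
    // META-INF/services/<your.package>.Action
    for (Action action : ServiceLoader.load(Action.class)) {
        actions.put(action.name(), action);
    }
    return actions;
}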
The external route hits this method in a controller:
public static void externalRouteHit() {
    Map<String, String> myParams = request.params.allSimple();
    redirectedRoute(myParams);
}
Then I try to pass the Map to another method in the same controller, but it is null.
public static void redirectedRoute(Map<String, String> myParams) {
    if (myParams == null)
        Logger.info("WTF");
}
I can pass a string or boolean fine. What am I doing wrong?
If you want to call another public static void method from one of your controllers without Play creating a redirect, you'll have to annotate the method with @Util.
Example:
public class MyController extends Controller {

    public static void index() {
        Map xyz = ....;
        helperMethod(xyz);
    }

    @Util
    public static void helperMethod(Map map) {
        // do stuff
    }
}
According to http://www.playframework.org/documentation/1.2.4/controllers, when a Play action handles a Map parameter, it expects a specific format for the query parameters:
Play also handles the special case of binding a Map like this:

public static void show(Map client) {
    …
}

A query string like the following:

?client.name=John&client.phone=111-1111&client.phone=222-2222

would bind the client variable to a map with two elements. The first element with key name and value John, and the second with key phone and value 111-1111, 222-2222.
In other words, you have to use specially formatted, named query parameters. What you want is instead to pass along all the query parameters.
Here's a working example. It seems verbose, but it works. Try hitting /application/externalRouteHit?color=red&size=XS.
public class Application extends Controller {

    public static void externalRouteHit() {
        Map<String, Object> myParams = new HashMap<String, Object>();
        for (String key : params.allSimple().keySet()) {
            if (!key.equals("body")) {
                myParams.put(key, params.allSimple().get(key));
            }
        }
        redirect(Router.reverse("Application.redirectedRoute", myParams).url);
    }

    public static void redirectedRoute() {
        renderText("color = " + params.get("color") + ", size = " + params.get("size"));
    }
}
This is because the redirect is a true HTTP redirect (302), so it produces a new HTTP request for the target action's URL, and Play attempts to convert the Map into part of that URL.
I guess the conversion to or from the Map is what is failing.
If you want to pass request parameters through a redirect, you might simply use params.flash(): it stores all params in a cookie, and they will be available in the called controller (and template) through the flash variable.
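A minimal sketch of that approach, reading the flashed values back through the flash scope in the target action (the parameter names are just examples):

public class Application extends Controller {

    public static void externalRouteHit() {
        // Store all current request params in the flash cookie
        params.flash();
        redirectedRoute();
    }

    public static void redirectedRoute() {
        // Values flashed by the previous request are available here
        renderText("color = " + flash.get("color") + ", size = " + flash.get("size"));
    }
}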
As for the Map, it should work; the documentation at http://www.playframework.org/documentation/1.2.4/controllers specifically says:
Play also handles the special case of binding a Map like this:

public static void show(Map client) {
    …
}
You might want to check exactly which type is returned by request.params.allSimple(); maybe it requires some special map implementation.
I figured this must be a common question, but I surprisingly couldn't find an answer, so maybe my entire structure is terribad...
I have an activity which downloads values/states of a game from a web service via AsyncTask. These values are used to update a custom view.
Once the view is created, various events from the view launch an AsyncTask to download other information.
This is functional, but the problem is that I now have half a dozen AsyncTask classes in the activity with almost identical code. The only difference is the type of object that is returned (which is based on the JSON from the web service) and the method that is called from onPostExecute().
How can I use just two AsyncTasks (one for POST and one for GET) without knowing what type of JSON object will be returned by the web service?
In a similar vein, how can I determine the type of object returned by the web service? If there is a problem, the web service will return a JSON string that corresponds to an ErrorMessage object rather than (for example) a GameData object.
Should I be using switch and instanceof in onPostExecute() somehow? Callbacks maybe?
You can use an abstract base class that your related classes extend.
Sample code:
public abstract class IBaseObject {

    protected String error;

    public IBaseObject(String param) {
        error = param;
    }

    public abstract String getError();
}

public class ObjectOne extends IBaseObject {

    private String objectParam;

    public ObjectOne(String error, String objectSpecificParam) {
        super(error);
        objectParam = objectSpecificParam;
    }

    @Override
    public String getError() {
        return error;
    }
}
and for example, use it like this:
private class GetTask extends AsyncTask<String, Void, IBaseObject> {

    protected IBaseObject doInBackground(String... url) {
        // Get your data.
        // Construct your corresponding object given by specific
        // parameters from your JSON response.
        if (a_parameter_match) {
            return new ObjectOne(some_json_params...);
        } else {
            return new ObjectTwo(some_json_params...);
        }
    }

    protected void onPostExecute(IBaseObject object) {
        object.getError(); // Or whatever you need here.
    }
}
This is just off the top of my head. I can't relate it to your exact problem, but the ideas here should be enough to get you started on your new structure.
This is too long for a comment, so I'm writing an answer. However, it was the advice of @Pompe de velo that got me on this track, so I am accepting that answer. I also left out some information from my question that could have been useful.
Anyway, as of right now I do not see any major downsides to this approach, but time (or maybe another SO user ;]) will tell...
Essentially, I have assigned a constant to every type of object that the activity will try to get. The part I left out was that the server only returns an error object on a 4xx/5xx HTTP status code. In other words, I am certain to get either the object I am expecting or an error object, and I can tell which one I got from the status code. A switch then sends the actual JSON string to the appropriate method, which can manipulate the response as necessary.
Simplified pseudocode...
private void getGameData() {
    new MyAsyncTask(this, MyAsyncTask.OBJ_GAME_DATA).execute();
}

static class MyAsyncTask extends AsyncTask<String, Integer, String> {

    private int outputObjectType;

    protected static final int OBJ_GAME_DATA = 0;
    protected static final int OBJ_OTHER_DATA = 1;
    protected static final int OBJ_DIFFERENT_DATA = 2;
    protected static final int OBJ_SERVER_ERROR = 3;

    MyAsyncTask(MyActivity activity, int expectedObject) {
        outputObjectType = expectedObject;
    }

    doInBackground() {
        if (httpStatusCode >= 400) {
            outputObjectType = MyAsyncTask.OBJ_SERVER_ERROR;
        }
        return jsonStringFromServer;
    }

    onPostExecute(String json) {
        switch (outputObjectType) {
            case MyAsyncTask.OBJ_SERVER_ERROR:
                serverError(json);
                break;
            case MyAsyncTask.OBJ_GAME_DATA:
                processGameData(json);
                break;
            // ....
        }
    }
}

private void serverError(String json) {
    ServerError se = new Gson().fromJson(json, ServerError.class);
    Log.d(TAG, se.getErrorMessage());
}

private void processGameData(String json) {
    GameData gd = new Gson().fromJson(json, GameData.class);
    // .......
}
I think this is more or less what @Pompe de velo was saying; I'm just basing my a_parameter_match on the status code rather than something within the JSON.
If this is flawed, I'd love to learn why!