Update the Task model - RuntimeException: DataSource user is null? - java

I started learning the Play framework today, and it is very good and easy to learn.
I successfully completed the sample provided on their website, but I wanted to make some modifications to it.
I wanted to see if I could update the label of a particular task, so I took the following approach.
First, I added a route to update the data:
POST /tasks/:id/update controllers.Application.updateTask(id: Long)
Then I added the following code to the index.scala.html file:
@form(routes.Application.updateTask(task.id)) {
    <label class="contactLabel">Update note here:</label>
    @inputText(taskForm("label")) <br />
}
Then I modified the Application.java class to:
public static Result updateTask(Long id) {
    Form<Task> taskForm = Form.form(Task.class).bindFromRequest();
    if (taskForm.hasErrors()) {
        return badRequest(views.html.index.render(Task.all(), taskForm));
    } else {
        Task.update(id, taskForm.get());
        return redirect(routes.Application.tasks());
    }
}
Finally, in Task.java I added this code:
public static void update(Long id, Task task) {
    find.ref(id).update(task.label);
}
But when I perform the update operation I get this error
[RuntimeException: DataSource user is null?]
Needless to say, I have uncommented
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:play"
ebean.default="models.*"
in application.conf, since I am already able to save and delete data; but I cannot update the data in the database. Why is this happening? Has someone tried this before, and how can I solve this error?

Your update(Long id, Task task) method on the Task model should look like this:
public static void update(Long id, Task task) {
    task.update(id); // updates this entity, by specifying the entity ID
}
Because you pass the task variable in as the updated data, you don't need to look up a reference to the Task object with find.ref(id). Moreover, the one-parameter update() method on the play.db.ebean.Model class takes the ID of the model as its parameter.
Hope this helps solve your problem. :)

Ensure you have uncommented the Ebean configuration in application.conf:
# Ebean configuration
# ~~~~~
# You can declare as many Ebean servers as you want.
# By convention, the default server is named `default`
#
ebean.default="models.*"


Creating a global transaction Id that is accessible through multiple packages

Hi to all Java experts!
I am working on onboarding a new and shiny process visualization service and I need your help!
My project structure goes like this:
The Service package depends on the Core package, which depends on the Util package. Something like this:
Service
|-|- Core
|-|-|- Util
The application package has the main method from where our code begins. It calls some of the Core methods, which use the Util package to read information from the input.
package com.dummy.service;

public class Service {
    public void main(Object input) {
        serviceCore.call(input);
    }
}

package com.dummy.core;

public class Core {
    public void call(Object input) {
        String stringInput = util.readFromInput(input);
        // Do stuff
    }
}

package com.dummy.util;

public class Util {
    public String readFromInput(Object input) {
        // return stuff;
    }
}
The problem starts when I want to onboard to the visualization service. One requirement is to use a unique transaction Id for each call to the service.
My question is: how do I share the process Id between all of these methods without doing too much refactoring of the code? To see the entire process in the Process Visualization tool, I will have to use the same ID across the entire call. My vision is that this is going to be something like:
package com.dummy.service;

public class Service {
    public void main(Object input) {
        processVisualization.signal(PROCESS_ID, "transaction started");
        serviceCore.call(input);
        processVisualization.signal(PROCESS_ID, "transaction ended");
    }
}

package com.dummy.core;

public class Core {
    public void call(Object input) {
        processVisualization.signal(PROCESS_ID, "Method call is invoked");
        String stringInput = util.readFromInput(input);
        // Do stuff
    }
}

package com.dummy.util;

public class Util {
    public String readFromInput(Object input) {
        processVisualization.signal(PROCESS_ID, "Reading from input");
        // return stuff;
    }
}
I was thinking about the following options, but all of these are just abstract ideas that I am not even sure can be implemented. And if yes, then how?
1. Creating a new package that all three packages depend on, which "holds" the process Id for each call. But how? Should I use a static class in this package? A singleton?
2. Using ThreadLocal variables. I've read this post about them: When and how should I use a ThreadLocal variable? But I am not familiar with these and am not sure how to implement this idea - should it go in a separate package like I mentioned in 1? (A sketch of this option follows the list.)
3. Changing the method signatures to pass the id as a variable. This is, unfortunately, too pricey in terms of time and the danger of a large refactoring.
4. Using file writing - saving the ID in some file that is accessible throughout the process.
5. Constructing a unique id from the input - I think this could be the perfect solution, but we may receive the same input in separate calls to the service.
6. Accessing the JVM for some unique transaction id. I know that when we are logging stuff, we have the RequestId printed in the log line. This is the pattern we use in the Log4J configuration:
<pattern>%d{dd MMM yyyy HH:mm:ss,SSS} %highlight{[%p]} %X{RequestId} (%t) %c: %m%n</pattern>
This RequestId is a variable on the ThreadContext that is created before the job. Is it possible and/or recommended to access this parameter and use it as a unique transaction id?
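For option 2, a minimal sketch of such a holder, assuming a new package that all three packages depend on (the package and class names here are illustrative, not from the original project):
import java.lang.ThreadLocal;

package com.dummy.context;

// Hypothetical holder for option 2: a ThreadLocal keeps one process id per
// thread, so service, core and util can all read it without any method
// signature changes.
public final class ProcessContext {

    private static final ThreadLocal<String> PROCESS_ID = new ThreadLocal<>();

    private ProcessContext() { }

    public static void set(String id) {
        PROCESS_ID.set(id);
    }

    public static String get() {
        return PROCESS_ID.get();
    }

    // Clear when the call finishes; pooled threads otherwise keep stale ids.
    public static void clear() {
        PROCESS_ID.remove();
    }
}
Note that this only works while the whole call stays on a single thread; anything that hops threads needs to copy the id across explicitly.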
In the end we've utilized Log4J's Thread context.
It's probably not the best solution, since we are mixing the purposes of the same mechanism, but this is how we did it:
The process id is extracted like this:
org.apache.logging.log4j.ThreadContext.get("RequestId");
And it is initialized in the handler chain (which one depends on the service you are using):
ThreadContext.put("RequestId", Objects.toString(job.getId(), (String)null));
This happens for every job that is received.
Disclaimer: This solution hasn't been fully tested yet, but this is the direction we are going with.
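Put together, a minimal sketch of that direction; the wrapper class below is hypothetical, while ThreadContext.put/get/remove are the actual Log4J API:
import java.util.Objects;
import java.util.UUID;

import org.apache.logging.log4j.ThreadContext;

// Hypothetical wrapper around Log4J's ThreadContext; the key matches the
// %X{RequestId} placeholder in the logging pattern shown above.
public final class TransactionIds {

    private static final String KEY = "RequestId";

    private TransactionIds() { }

    // Called once per job in the handler chain, before any work starts.
    public static void start(String jobId) {
        ThreadContext.put(KEY, Objects.toString(jobId, UUID.randomUUID().toString()));
    }

    // Any package (service, core, util) can read the id without changing
    // any method signatures.
    public static String current() {
        return ThreadContext.get(KEY);
    }

    // Remove at the end of the job so pooled threads don't leak stale ids.
    public static void end() {
        ThreadContext.remove(KEY);
    }
}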

Configuring DropWizard Programmatically

I have essentially the same question as here but am hoping to get a less vague, more informative answer.
I'm looking for a way to configure DropWizard programmatically, or at the very least, to be able to tweak configs at runtime. Specifically I have a use case where I'd like to configure metrics in the YAML file to be published with a frequency of, say, 2 minutes. This would be the "normal" default. However, under certain circumstances, I may want to speed that up to, say, every 10 seconds, and then throttle it back to the normal/default.
How can I do this, and not just for the metrics.frequency property, but for any config that might be present inside the YAML config file?
Dropwizard reads the YAML config file and configures all the components only once, on startup. Neither the YAML file nor the Configuration object is ever used again. That means there is no direct way to reconfigure things at runtime.
It also doesn't provide special interfaces/delegates through which you can manipulate the components. However, you can usually access the components' objects (and if not, you can always send a pull request) and configure them manually as you see fit. You may need to read the source code a bit, but it's usually easy to navigate.
In the case of metrics.frequency, you can see that the MetricsFactory class creates ScheduledReporterManager objects per metric type using the frequency setting, and it doesn't look like you can change them at runtime. But you can probably work around it somehow, or even better, modify the code and send a pull request to the dropwizard community.
Although this feature isn't supported out of the box by dropwizard, you're able to accomplish this fairly easily with the tools they give you. Note that the solution below definitely works on config values you've provided, but it may not work for built-in configuration values.
Also note that this doesn't persist the updated config values to the config.yml. However, that would be easy enough to implement yourself, simply by writing to the config file from the application. If anyone would like to write this implementation, feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {
    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }

    public void setMyConfigValue(String value) {
        myConfigValue = value;
    }
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {
    ExampleConfiguration config;

    public UpdateConfigTask(ExampleConfiguration config) {
        super("updateconfig");
        this.config = config;
    }

    @Override
    public void execute(Map<String, List<String>> parameters, PrintWriter output) {
        config.setMyConfigValue("goodbye");
    }
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
@Path("/config")
public class ConfigResource {
    private final ExampleConfiguration config;

    public ConfigResource(ExampleConfiguration config) {
        this.config = config;
    }

    @GET
    public Response handleGet() {
        return Response.ok().entity(config.getMyConfigValue()).build();
    }
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructors of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept, see here:
Is Java "pass-by-reference" or "pass-by-value"?
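To make that concrete, here's a tiny sketch (the demo class is hypothetical; it reuses the classes defined above) showing both objects observing the same instance:
import java.io.PrintWriter;
import java.util.Collections;

public class SharedReferenceDemo {
    public static void main(String[] args) throws Exception {
        ExampleConfiguration shared = new ExampleConfiguration();
        shared.setMyConfigValue("hello");

        ConfigResource resource = new ConfigResource(shared); // reads shared
        UpdateConfigTask task = new UpdateConfigTask(shared); // mutates shared

        task.execute(Collections.emptyMap(), new PrintWriter(System.out));

        // Both objects hold the same reference, so the task's change is
        // visible through the resource's config: this prints "goodbye".
        System.out.println(shared.getMyConfigValue());
    }
}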
The linked classes above are to a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. However, the dropwizard Configuration class (which you need to extend for your own configuration) does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I solved this with bytecode manipulation via Javassist.
In my case, I wanted to change the "influx" reporter, and modifyInfluxDbReporterFactory must be run BEFORE Dropwizard starts:
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

private static void modifyInfluxDbReporterFactory() throws Exception {
    ClassPool cp = ClassPool.getDefault();
    // Do NOT use InfluxDbReporterFactory.class.getName() here, as that would
    // force the class into the class loader before it can be modified.
    CtClass cc = cp.get("com.izettle.metrics.dw.InfluxDbReporterFactory");
    CtMethod m = cc.getDeclaredMethod("setTags");
    m.insertAfter(
        "if (tags.get(\"cloud\") != null) tags.put(\"cloud_host\", tags.get(\"cloud\") + \"_\" + host);"
        + "tags.put(\"app\", \"sam\");");
    cc.toClass();
}
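For example (the application class name here is hypothetical), the patch can be applied as the very first statement of main, before Dropwizard, and therefore the class loader, ever touches the factory class:
public static void main(String[] args) throws Exception {
    modifyInfluxDbReporterFactory(); // patch the bytecode first
    new MyApplication().run(args);   // only now let Dropwizard start
}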

Hadoop Map/Reduce Mapper 'map' method and logs

I've recently been asked to look into speeding up a mapreduce project.
I'm trying to view log4j log information which is being generated within the 'map' method of a class which implements org.apache.hadoop.mapred.Mapper.
Within this class there are the following methods:
@Override
public void configure( .. ) { .. }

public static void doCompileAndAdd( .. ) { .. }

public void map( .. ) { .. }
Logging information is available for the configure method and the doCompileAndAdd method (which is called from the configure method); however, no log information is being displayed for the 'map' method.
I've also tried simply using System.out.println( .. ) within the map method without success.
Is there anyone who might be able to help to shed some light on this issue?
Thanks,
Telax
Since the mapper classes actually run in tasks distributed across nodes in the cluster, the stdout from those tasks appears in the individual logs for each task. The simplest way to see those logs is to go to the job tracker page for the cluster, usually at http://namenode:50030/jobtracker.jsp. From there you can select the job and then select the map tasks whose logs you are interested in.

Sonar web client- Updating a value fails

I am trying to modify an existing manual metric in Sonar from an externally supplied value, using the web service client.
So far I am able to read the existing metric value from the plugin, but I am having trouble updating the values.
Also, when updating the metric like this:
sonar.update(new PropertyUpdateQuery("<metric_key>", "Metric Value"));
nothing happens, although the javadocs mention the PUT operation in the UpdateQuery class.
Edit: I have also tried to update the metric using this approach:
UpdateQuery<Metric> update = new UpdateQuery<Metric>() {
    @Override
    public Class<Metric> getModelClass() {
        return Metric.class;
    }

    @Override
    public String getUrl() {
        return "/drilldown/measures/70?metric=<Metric Key>";
    }
};
sonar.update(update);
Is this the correct method of updating a manual metric?
Also, should the model class and URL be something more specific? No documentation for this exists so far.
When dealing with the REST API, the best thing is to visit the following page: http://docs.codehaus.org/display/SONAR/Web+Service+API
There, you can find the available operations on manual measures: get, create and delete. There is no update operation on manual measures.
BTW, the equivalents in the Java Web Service Client are the ManualMeasure*Query classes, not PropertyUpdateQuery, which updates Sonar properties.

Hooking custom actions when deleting cascadingly

A simple case. A user has many photos. When a user gets deleted, all of his/her photos should be deleted too (rule of cascades).
However, I want to be able to execute some custom code right before every photo is deleted.
Unfortunately, when deleting users, all I am doing is calling userDAO.deleteUser(userID), so no specific action is taken on the photos (they are deleted by Hibernate itself).
Also, I don't really want the userDAO to have the knowledge that a user has photos, so this custom code should live somewhere else.
I wish it were as simple as providing an OnDelete callback when I annotate my entity classes, but I haven't seen any such facility in the Hibernate docs.
I think you need to apply Spring AOP to the function which deletes the user, for example:
public void deleteUser(User user) {
    Session session = sessionFactory.getCurrentSession();
    // delete the object
}
What you need to do is apply @Around advice:
@Pointcut("execution(* com.vanilla.dao.*.*(..))")
public void deleteUserMethods() { }

@Around("deleteUserMethods()")
public Object profile(ProceedingJoinPoint pjp) throws Throwable {
    Object output = pjp.proceed();
    // perform any operations on the pjp and its parameters
    return output;
}
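As a sketch of how that advice could hook the per-photo logic in before the cascade fires (the aspect class, the UserDAO method signature, and the getPhotos() accessor are illustrative assumptions, not from the original post):
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class PhotoCleanupAspect {

    // Narrow the pointcut to the delete method so other DAO calls are untouched.
    @Pointcut("execution(* com.vanilla.dao.UserDAO.deleteUser(..))")
    public void deleteUserMethods() { }

    @Around("deleteUserMethods()")
    public Object aroundDelete(ProceedingJoinPoint pjp) throws Throwable {
        User user = (User) pjp.getArgs()[0];
        // Custom per-photo logic runs here, before the delete (and the
        // cascade) actually happens in pjp.proceed().
        for (Photo photo : user.getPhotos()) {
            // e.g. remove the image file from disk, write an audit entry, ...
        }
        return pjp.proceed();
    }
}
This keeps the photo knowledge out of the userDAO itself, which was one of the constraints in the question.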
I recommend you look at this example:
http://veerasundar.com/blog/2010/01/spring-aop-example-profiling-method-execution-time-tutorial/
The Spring documentation will also be very helpful:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-schema
