I have a Java project with Gradle as build environment.
Simplifying my scenario:
I have a module called Utils, and one of its interfaces is TimeProvider, which is implemented either by a real time provider (System.currentTimeMillis()) or by a logical time provider (incrementing a counter). Another example: instead of sleeping for some time, I can just increment the timestamp. The goal is to run tests independently of the current time. An example use case is being able to stop at breakpoints without worrying about the elapsed time disrupting my tests.
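A minimal sketch of the idea (the names here are illustrative, not my actual code):

```java
// Abstraction over "now" and "sleep" so that tests can control time.
public interface TimeProvider {
    long currentTimeMillis();
    void sleep(long millis) throws InterruptedException;
}

// Production implementation: delegates to the real clock.
public class SystemTimeProvider implements TimeProvider {
    @Override
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }

    @Override
    public void sleep(long millis) throws InterruptedException {
        Thread.sleep(millis);
    }
}

// Test implementation: logical time that is advanced explicitly instead of waiting.
public class LogicalTimeProvider implements TimeProvider {
    private long now = 0;

    @Override
    public long currentTimeMillis() {
        return now;
    }

    @Override
    public void sleep(long millis) {
        now += millis; // "sleeping" just advances the counter
    }
}
```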
(BTW, if you have a better way to do this, I would appreciate any insight.)
My question is: how can I force all parts of the code to use this infrastructure and not the real Java time facilities (Thread.sleep, System.currentTimeMillis, etc.)?
I want to prevent a scenario where a programmer accidentally writes code that uses the real Java time instead of my infrastructure.
One option is to use a SecurityManager, but this is tricky because the error will only be caught "some time" at runtime.
I would prefer a way to catch this at compile time.
I wonder if there is a way, maybe in Gradle, to forbid some modules from using parts of the Java API? Or maybe there is another way to do this?
Thanks,
I am developing a Java application that reads data from a Redis database. I use the Lettuce library to connect to Redis, which in turn uses the Netty library to communicate with Redis.
I suspect that the execution time of my application is greater than expected, so I conducted a profiling experiment using JProfiler. I was surprised to see that a FastThreadLocalRunnable takes a significant portion of the execution time with no justification, as the tree shows no function calls taking time.
So, is it a bug in the Lettuce library, or is it a problem with how the profiler measures the execution time?
Any help is appreciated
Edit:
Thanks to Ingo's answer I can now expand the tree, but it turns out that Java NIO is consuming my processor.
Any idea?
The call tree in JProfiler only shows classes that are included in the call tree filters that you define in the profiling settings.
By default, this excludes a lot of common frameworks and libraries so that you can get started without configuring anything. It is better if you delete these filters and add your own profiled packages instead.
In addition to the profiled classes, JProfiler shows the thread entry point even if it is not a profiled class, such as io.netty.util.concurrent.FastThreadLocalRunnable. Also, the first call into non-profiled classes is always shown at any level in the call tree.
In your case there are call chains to non-profiled classes below io.netty.util.concurrent.FastThreadLocalRunnable that never call a profiled class. They could belong to some framework or to some part of your code that is not included in the profiled classes. This time has to go somewhere, so it is attributed to the io.netty.util.concurrent.FastThreadLocalRunnable node.
An easy way to check is to disable filtering in the profiling settings; then you will see all classes.
More information about call tree filters can be found at
https://www.ej-technologies.com/resources/jprofiler/help/doc/main/methodCallRecording.html
Is there a tool that can trace code execution and generate cut-down Java classes that capture the methods that actually got executed? I'm trying to pull some code out of a big codebase without including tons of static dependencies. Essentially, dynamic dependency extraction.
For example if method M of class C is never executed, the extracted code would leave that method out, along with everything that method uses and depends upon.
While it would be possible to create such a thing, it would depend on the runs being deterministic: you'd get a NoSuchMethodError as soon as the control flow diverges from the original run.
Imagine doing that to some calendar code, and then suddenly it's a leap year and the code that was "optimized away" needs to run. Not a very robust solution, and hence not part of a standard developer toolkit.
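A hypothetical sketch of that leap-year scenario: if the traced run never hit a leap year, a dynamic extractor would consider the leap-year branch dead and strip it, and the next leap year the program would break:

```java
import java.time.LocalDate;

public class MonthLengths {

    public int daysInFebruary(int year) {
        // If the recorded run only ever saw non-leap years, leapYearDays()
        // would look unused and get "optimized away" by the extractor;
        // the first leap year afterwards, this call would fail at runtime.
        return LocalDate.of(year, 2, 1).isLeapYear() ? leapYearDays() : 28;
    }

    private int leapYearDays() {
        return 29;
    }
}
```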
If you're trying to extract relevant code out of a spaghetti codebase, you could try running a profiler (jvisualvm) to see what code is being run. This would require that you get the software to the state that it's only running the code you're interested in, but so would your initial idea.
Story: in my Java code I have a few ScheduledFutures that I need to run every day at a specific time (15:00 for example). The only available things I have are a database, my current application, and OpenShift with multiple pods. I can't move this code out of my application and must run it from there.
Problem: ScheduledFuture works on every pod, but I need to run it only once a day. I have a few ideas, but I don't know how to implement them.
Idea #1:
Set an environment variable on a specific pod; then I will be able to check whether this variable exists (and its value), read it, and run the scheduled task if required. I know there is a risk if that pod hangs or disappears, but it's better not to run the scheduled task at all than to run it multiple times.
Idea #2:
Determine a leader pod somehow. This seems to be a bad idea in my case since it always has the "split-brain" problem.
Idea #3 (a bit offtopic):
Create my own synchronization algorithm through the database. To be fair, it's the simplest way for me since I'm a programmer and not an SRE. I understand that this is not the best option, though.
Idea #4 (a bit offtopic):
Just use the Quartz scheduler library. I personally don't really like that and would prefer one of the first two ideas (if I am able to implement them), but at the moment it seems like my only valid choice.
UPD: Maybe you have some other suggestions, or a warning that I shouldn't ever do this?
I would suggest using a ready-made solution. Getting these things right, especially covering all possible corner cases with respect to reliability, is hard. If you do not want to use Quartz, I would at least suggest a database-backed solution. Postgres, for example, has SELECT ... FOR UPDATE SKIP LOCKED (scroll down to the section "The Locking Clause"), which may be used to implement one-time-only scheduling.
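A rough sketch of how that could look with plain JDBC; the table and column names here are made up for illustration, and it assumes a row per day is inserted up front (e.g. with INSERT ... ON CONFLICT DO NOTHING). Every pod may call this at 15:00, but FOR UPDATE SKIP LOCKED ensures only one of them actually claims the row:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OnceADayJob {

    // Returns true if this pod claimed and ran today's job, false otherwise.
    boolean tryRunOnce(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement claim = conn.prepareStatement(
                "SELECT id FROM scheduled_job " +
                "WHERE name = 'daily-report' AND run_date = CURRENT_DATE AND done = false " +
                "FOR UPDATE SKIP LOCKED");
             ResultSet rs = claim.executeQuery()) {

            if (!rs.next()) {           // nothing pending, or another pod holds the lock
                conn.rollback();
                return false;
            }
            long id = rs.getLong("id");

            runBusinessLogic();         // the actual once-a-day work

            try (PreparedStatement done = conn.prepareStatement(
                    "UPDATE scheduled_job SET done = true WHERE id = ?")) {
                done.setLong(1, id);
                done.executeUpdate();
            }
            conn.commit();
            return true;
        } catch (SQLException | RuntimeException e) {
            conn.rollback();
            throw e;
        }
    }

    private void runBusinessLogic() {
        // ... your task ...
    }
}
```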
You can create a CronJob using OpenShift:
https://docs.openshift.com/container-platform/4.7/nodes/jobs/nodes-nodes-jobs.html
and have this job trigger some endpoint in your application that will invoke your logic.
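For illustration, a minimal endpoint using the JDK's built-in HTTP server that such a CronJob could call (the path and port are made up; in a real application you would rather expose this through your web framework and make sure only the CronJob can reach it):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class TriggerEndpoint {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // The OpenShift CronJob would e.g. curl http://<service>:8080/internal/run-daily-task
        server.createContext("/internal/run-daily-task", exchange -> {
            runDailyTask(); // the logic previously driven by ScheduledFuture
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }

    private static void runDailyTask() {
        // ... your once-a-day logic ...
    }
}
```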
Generally, unused/dead code is bad, but I wonder what to do with unused components.
Imagine that I have an application that sends notifications to users. It sends EmailNotification, but after some time we switch to sending notifications via SMS. Instead of deleting the EmailNotification class, I create an interface, let's say Notification, and I have this structure:
Notification
--SmsNotification
--EmailNotification
I don't want to remove EmailNotification, because after some time we may go back to email notifications, and that change will be as easy as marking the EmailNotification class as @Primary.
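As a sketch (assuming Spring, since @Primary is what selects the active implementation):

```java
import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Component;

public interface Notification {
    void send(String userId, String message);
}

@Primary   // currently the active implementation
@Component
class SmsNotification implements Notification {
    @Override
    public void send(String userId, String message) {
        // ... send via SMS gateway ...
    }
}

@Component // kept around; switching back would mean moving @Primary here
class EmailNotification implements Notification {
    @Override
    public void send(String userId, String message) {
        // ... send via email ...
    }
}
```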
In such a case one of the implementations is always dead code, and I wonder whether that is OK, or generally how to deal with it?
Actually, this is not the best practice.
Instead, you can separate your code into two different modules, one per component. That way you can use either of the two modules depending on your needs, through your build automation tool (Maven or Gradle, for example), and the produced JARs will contain no dead code.
I would agree that this is not dead code, just unused code. However, the code in production should be as clean as possible, so if you are using version control such as Git, I would remove the code, as it will always be there in the history of the repository. If you do not want to do this, then I would suggest documenting why the code is there, with something like a Javadoc comment or a README file.
There should not be any problem in keeping the old code, which might become reusable in the future. As a matter of fact, the design itself should be such that it can accommodate changes in components without severe impact.
But if there is an unreachable block of code that will certainly not add any value to the product now or in the future, it is better removed, because it unnecessarily increases the number of lines of code and slows down testing, ultimately impacting delivery. Additionally, this unused code will also appear in the final artifact (the JAR/WAR), needlessly increasing its size.
In my case, I was using SonarQube for static code analysis, and there were blocks of code, methods, and sometimes whole files that would only show up at testing time. They slowed down the process and took up otherwise unnecessary heap space. Getting rid of those blocks certainly helped us speed up the process.
One thing you should be aware of is that even unused components need to be maintained. Some examples that come to my mind:
If the Notification interface changes, EmailNotification has to be changed too
If you update dependencies used by multiple components, you might need to change EmailNotification too
If you change or introduce new quality measures (e.g. x% of code coverage, specific code styles, no warnings policy etc.), they also apply to unused components - which leads to additional work
The changes required to maintain unused components could be obvious (because the component does not compile any more) or subtle (it still compiles, but since it is not used, no one notices that it fails at runtime). Even if compile errors get fixed, chances are that the component is not being tested properly.
So by keeping unused modules you might have to do more work than necessary for certain changes, and you still run the risk of having a broken module that you can't just turn on when needed. It could be easier to just retire the component and revive and update it when it is actually needed. You could wait with the retirement until there actually is a breaking change, though. If you are lucky, no breaking change comes before the component is needed again.
If you are certain that you'll need the component again in the near future, then keep it. But make sure to maintain it properly.
As said in the documentation, we can continue the build if one of the tasks has failed. But I don't get the point of that feature... Why do we need to execute other tasks if one of the tasks has failed? Is it safe at all? Could you provide an example?
Yes, it makes sense, for example when generating classes from a WSDL in case the service is not available.
Then you should provide some logic in your application to handle that service not working.
The second sentence in your linked doc says:
This allows the build to complete sooner, but hides other failures that would have occurred. In order to discover as many failures as possible in a single build execution, you can use the --continue option.
So instead of failing on the first error, it just goes on and fails on all of them. Imagine a web form that only tells you one error at a time after submitting, so it takes you ages to fill it out, versus a form that shows you all of your current errors at once.
Examples obviously include developing the original Gradle file and testing it with your build, or running on a continuous integration service, where you would rather have "all" the errors at once instead of hitting the build button all day.
If a task fails, any subsequent tasks that depend on it will not be executed, as it is not safe to do so.
So you will most likely not end up with the result you are expecting. But you could! That is for you to decide, as it depends on your build and what you are doing. So, is it safe? Heck no. But sometimes we all have to do unsafe things...
If you want to get rid of something that is failing but is not vital to the actual build result (e.g. the JAR file you are after) and is only part of the build process (e.g. a CodeNarc task as part of the tests), and you would rather fix a critical bug with ugly code, you might be better off just excluding that task (gradle jar -x codenarc) instead of using this feature. Is this safe? Heck no... you get the picture!