Is it possible to configure launch4j to enable remote debugging of the resulting application depending on a command line parameter? I'm aware that you can achieve this by having the launched application launch another Java application, but I would like to eliminate that overhead.
According to launch4j's documentation, you can pass additional JVM options to the application using a file called ApplicationName.l4j.ini.
So you could just create such a file beside your application and write the debug configuration to it (as described here):
-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n
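For example, if the launch4j executable is MyApp.exe (the name here is a placeholder for your actual launcher name), place a file named MyApp.l4j.ini in the same directory containing the options above:

```ini
-Xdebug
-Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n
```

With server=y and suspend=n, the JVM listens on port 4000 without blocking startup, so a debugger can attach at any time.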
I'm trying to debug a Java application in Kubernetes using a Cloud Code plugin.
There is no trouble with the default debug.
I just click debug and it works, but... I don't know how to connect to the application at startup.
I've tried to add the option -agentlib:jdwp=transport=dt_socket,server=n,suspend=y,address=,quiet=y
but the JVM crashed because Cloud Code adds its own -agentlib option and the JVM can't handle two options with the same name.
How can I edit the agentlib option that Cloud Code adds (to set suspend=y), or maybe disable that option?
Or maybe there is another way to debug the application while it starts?
I've tried to add the agentlib option to JDK_JAVA_OPTIONS, but Skaffold (the library inside the Cloud Code plugin) looks for agentlib in JAVA_TOOL_OPTIONS.
I've put the option in JAVA_TOOL_OPTIONS and it works well.
Adding this as an answer to provide additional context.
Skaffold doesn't currently support this feature. There is an open feature request on Skaffold to add this ability.
Adding support for this has not been a high-priority item for Skaffold, as suspending on startup often causes puzzling problem cascades: startup, readiness, and liveness probes time out, leading to pod restarts, terminated debug sessions, and then new sessions being established. And container startup problems are often more easily debugged in isolation (e.g., running as a normal Java app and emulating the Kubernetes startup through env vars, sending messages, etc.).
All that said, Skaffold should respect existing -agentlib settings passed on the command-line or in JAVA_TOOL_OPTIONS. So as you found, you can pass along your own JDWP setting in your JAVA_TOOL_OPTIONS.
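One common shape for this (the port number here is an assumption, and server=y/suspend=y makes the JVM listen and wait for a debugger before running main, the inverse of the server=n attempt above) is to set JAVA_TOOL_OPTIONS in the container spec:

```yaml
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5005"
```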
I'm using jetty-runner.jar version 9.4.28.v20200408. When I run the java -jar command, I get this output:
Usage: java [-Djetty.home=dir] -jar jetty-runner.jar [--help|--version] [ server opts] [[ context opts] context ...]
and the server opts include this entry
--out file - info/warn/debug log filename (with optional 'yyyy_mm_dd' wildcard)
So I have used this expression
yyyy_mm_dd_${API_NAME}-${PORT}-http.log
The logging works with the yyyy_mm_dd wildcard, but old log files aren't deleted automatically. Is there a way I can control this?
Note: jetty-runner is deprecated and is being removed.
https://github.com/eclipse/jetty.project/commit/a1b38fadb836f768af6a2cb348d1687715381b25
jetty-runner is for quick testing of your webapp and is not recommended for production use, precisely because it's not set up for the kind of customization and configuration you are attempting now.
It's essentially a hardcoded configuration, with a scant few knobs you can tweak.
Logging is hardcoded to use Jetty's internal StdErrLog and Jetty's RolloverFileOutputStream, neither of which supports what you are attempting to do.
If you move to pure embedded-jetty, where you control things, or use the proper ${jetty.home} and ${jetty.base} split, then you can specify any slf4j based logging implementation you want, along with all of the custom logging behaviors you need (rollover, compression, triggers for rollover, old rollover file behaviors, etc).
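For instance, if you pick Logback as the slf4j backend, its time-based rolling policy handles both daily rollover and deleting old files; a sketch of a logback.xml (the path, pattern, and 30-day retention are placeholders):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover; %d drives the date in the file name -->
      <fileNamePattern>logs/api-%d{yyyy_MM_dd}.log</fileNamePattern>
      <!-- delete rolled files older than 30 days -->
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```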
I'm trying to write an application that uses the AWS API from an Android app written in Java. It seems that the recommended way to do it is using a special set of libraries called "Amplify." I was able to import the appropriate Amplify Java classes into my code, but I see that not all the parameters I want to supply (such as the S3 bucket or the API access key) can be given as method arguments.
All the advice I see online suggests running a command-line configuration tool installed via npm install aws-amplify. But I'd prefer not to use a command-line tool that asks me questions: I'd rather configure everything in code. And I don't want to install npm or mess around with it (full disclosure: I tried installing it and ran into some hassles).
Is there a way to supply the Amplify configuration without using the command-line tool, perhaps via a configuration file or some additional arguments to the methods I'm calling within Java?
I figured it out!
Amplify.configure() has a not-well-documented overload that lets you specify a config file in the form of an Android "resource."
So instead of using
Amplify.configure(getApplicationContext());
as directed in the tutorials, I use
Amplify.configure(
AmplifyConfiguration.fromConfigFile(getApplicationContext(), R.raw.amplifyconfiguration),
getApplicationContext());
The config file needs to be located in the app/src/main/res/raw/ path of the project and named amplifyconfiguration.json. The development environment automatically generates the definition of R.raw.amplifyconfiguration, an integer resource ID identifying the file.
That solves the problem of loading the configuration from an explicit file, without using the amplify CLI. The next hurdle is figuring out what keys can be specified in the file...
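As a starting point, here is a rough sketch of the file's shape for the S3 storage category; the bucket name and region are placeholders, and the exact keys are best checked against a file generated by the CLI, since this is reconstructed from memory of such files:

```json
{
  "UserAgent": "aws-amplify-cli/2.0",
  "Version": "1.0",
  "storage": {
    "plugins": {
      "awsS3StoragePlugin": {
        "bucket": "my-bucket",
        "region": "us-east-1",
        "defaultAccessLevel": "guest"
      }
    }
  }
}
```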
I'm working on a web application based on Spring MVC and Hibernate on Tomcat 8 (both for deployment and local development). The IDE is Spring Tool Suite (which is based on Eclipse).
I want to open a REPL (read-eval-print-loop, like Groovy's, Python's, Ruby's, etc) in the context of my application (while it's running on Tomcat locally), to speed up development by shortening the code -> test feedback loop.
That is, I want to be able to open a shell in the command line (or inside Eclipse) and do something like:
ClientDAO clientDAO = getAutowiredDAOFromSpringSomehow();
Client client = clientDAO.findByID(100);
client.setName("Another name");
clientDAO.save(client);
I can work around this a bit by setting a breakpoint somewhere in a controller and using Eclipse's debugger Display tab to execute arbitrary code, but this is a bit impractical and uncomfortable.
I'm open to using Groovy or Scala's shell if it's more convenient (I obviously still need access to my objects, though).
What are my options?
You may be able to do this using Spring Shell, JShell, or BeanShell.
Here's a project that embeds a REPL in an Android app using BeanShell.
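For JShell specifically, the JDK (9+) ships an embeddable evaluator in the jdk.jshell module, so no extra dependency is needed. A minimal sketch (wiring it up to your Spring beans would still require running it inside the application's JVM with its classpath):

```java
import jdk.jshell.JShell;
import jdk.jshell.SnippetEvent;

public class EmbeddedRepl {
    // Evaluate a single snippet and return its value as JShell renders it
    public static String eval(String snippet) {
        try (JShell shell = JShell.create()) {
            for (SnippetEvent event : shell.eval(snippet)) {
                if (event.value() != null) {
                    return event.value();
                }
            }
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(eval("1 + 2")); // prints 3
    }
}
```

In a real setup you would feed snippets in from a socket or console loop instead of a hardcoded string.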
I don't know if it's useful for your use case, but theoretically it should be possible to do this using CRaSH. It's a shell, like Bash on Linux, but for your Java application, and it's possible to create your own commands.
I have a Java application built with OSGi that I want to run in different modes, say a remote & local data source. I would like to be able to build and deploy a single version so that I could run the app as a service in remote mode and then stop the service & try different things in local mode.
I am using declarative services.
Would there be a way to do this?
# app -remote
Starting app in remote mode
Disabling com.example.data.local.FileStoreDao
Enabling com.example.data.remote.MySqlDao
...
And conversely:
# app -local
Starting app in local mode
Disabling com.example.data.remote.MySqlDao
Enabling com.example.data.local.FileStoreDao
...
Or something similar.
To quote Richard Hall:
The configuration of your application == The set of installed bundles.
The best and most maintainable solution will be to install a (slightly) different set of bundles for each of your runtime "modes". For example, most of the bundles would be the same, but you deploy either the MySqlDao bundle or the FileStoreDao bundle. Using a tool or launcher that allows you to easily set up and launch different combinations of bundles will be critical.
If you really want to do this without changing the set of bundles, you could package both MySqlDao and FileStoreDao into a single bundle and use DS to enable/disable one or the other based on config data coming from Config Admin.
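As a sketch of that single-bundle variant (the component and interface names are taken from the example output above; the Dao interface is hypothetical): give each DAO a DS descriptor with configuration-policy="require", so only the component whose PID you create a Config Admin configuration for actually activates:

```xml
<!-- OSGI-INF/mysqldao.xml -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.3.0"
               name="com.example.data.remote.MySqlDao"
               configuration-policy="require">
  <implementation class="com.example.data.remote.MySqlDao"/>
  <service>
    <provide interface="com.example.data.Dao"/>
  </service>
</scr:component>
```

A mirror-image descriptor for FileStoreDao plus a small startup component that writes the configuration for the selected mode gives you the enable/disable behavior shown above.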
Not sure which framework you're using, but in Equinox you can pass a different config file with a command-line switch:
http://www.eclipse.org/equinox/documents/quickstart-framework.php
You could have two config files and a wrapper (Java or batch file?) around the OSGi bootstrapper that selects the proper config file. I have done something like this, but in my case I ended up going with two distros with different plugins, as it was simpler and was all that I needed. Hope this helps.
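A minimal sketch of such a wrapper, assuming two Equinox configuration areas config-remote/ and config-local/, each with a config.ini listing that mode's bundles (the java command is echoed as a dry run; drop the echo to actually launch):

```shell
#!/bin/sh
# Hypothetical launcher: pick an Equinox configuration area per mode
launch_app() {
  MODE="${1:-remote}"
  case "$MODE" in
    remote|local) ;;
    *) echo "usage: launch_app [remote|local]" >&2; return 1 ;;
  esac
  echo "Starting app in $MODE mode"
  # dry run: remove the leading echo to actually start the framework
  echo java -jar org.eclipse.osgi.jar -configuration "config-$MODE"
}

launch_app "$@"
```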