I have deployed a Scala / Play Framework application on Heroku, and I have added the New Relic add-on to my app.
I followed the Java guide, since Scala runs on the JVM.
$ heroku addons:add newrelic:standard
-----> Adding newrelic:standard to ... done, v7 (free)
I unzipped the New Relic agent into a newrelic directory in the application:
$ git add newrelic
$ git commit -m 'add newrelic'
$ heroku config:add JAVA_OPTS='-Xmx384m -Xss512k -XX:+UseCompressedOops -javaagent:newrelic/newrelic.jar'
$ git push heroku master
Now to the problems. First, when I accessed the add-on I had to create a new account on New Relic, with a new password, and it asked for my credentials. Is this correct? Shouldn't my Heroku account suffice? It did seem to start working later, but it was a strange process, and I now believe I have two accounts: on Heroku's page my plan is "standard hourly", while on New Relic's it's "standard lite".
I don't understand how to see my performance stats, and I suspect New Relic isn't set up correctly.
One absurd thing is that the New Relic homepage reports insufficient permissions for everything except "tell a friend and earn bucks"; not even the support link works.
I have attached two screenshots with my credentials masked. Can anyone comment on whether they look as they should, or whether New Relic has set itself up incorrectly?
You should be able to use New Relic through the Heroku interface without creating a separate account.
Once your app is deployed with the agent and has received a few requests, you should start seeing data in the interface.
The agent does create a log (I believe you can get its output via heroku logs), so that might also help you troubleshoot.
I'd suggest opening a support ticket on http://support.newrelic.com.
This could be happening because your hosted application does not have the right credentials (e.g. the license key) provided by New Relic.
Did you update the default newrelic.yml file obtained from the newrelic.jar extract? You can obtain your app's license key in the account settings menu when accessing New Relic through the Heroku interface (your first screenshot). Then set the following config vars on Heroku:
NEW_RELIC_LICENSE_KEY="your license key"
NEW_RELIC_APP_NAME="your app name"
Don't forget to set the appropriate RACK_ENV config var too, e.g. RACK_ENV=production.
Then update your newrelic.yml file by finding and changing the following lines:
license_key: '<%= license_key %>' to license_key: '<%= ENV["NEW_RELIC_LICENSE_KEY"] %>'
app_name: My Application to app_name: '<%= ENV["NEW_RELIC_APP_NAME"] %>'
app_name: My Application (Development) to app_name: '<%= ENV["NEW_RELIC_APP_NAME"] %> (Development)'
app_name: My Application (Staging) to app_name: '<%= ENV["NEW_RELIC_APP_NAME"] %> (Staging)'
Here is a sample newrelic.yml file with environment vars set.
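A minimal sketch of how those lines could sit together in newrelic.yml, following the ERB-style substitution described above (any settings not shown are assumed to keep the agent defaults):

common: &default_settings
  license_key: '<%= ENV["NEW_RELIC_LICENSE_KEY"] %>'
  app_name: '<%= ENV["NEW_RELIC_APP_NAME"] %>'
  log_level: info

development:
  <<: *default_settings
  app_name: '<%= ENV["NEW_RELIC_APP_NAME"] %> (Development)'

staging:
  <<: *default_settings
  app_name: '<%= ENV["NEW_RELIC_APP_NAME"] %> (Staging)'

production:
  <<: *default_settings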
You should be able to access New Relic from the Heroku interface after you've pushed your changes.
Check whether you find anything in the Heroku logs with heroku logs. You can also increase New Relic's log level by setting the system properties newrelic.config.log_level and newrelic.debug. Also note that after creating a new account or changing a password, it takes a while until the changed credentials are propagated.
To set a finer log level:
$ heroku config:set JAVA_OPTS="-Xmx384m -Xss512k -XX:+UseCompressedOops -Dfile.encoding=UTF-8 -javaagent:target/staged/newrelic-agent-2.20.0.jar -Dnewrelic.bootstrap_classpath=true -Dnewrelic.config.file=./conf/newrelic.yml -Dnewrelic.config.log_level=finer -Dnewrelic.debug=true"
Make sure not to run with this in production; it produces quite a lot of log output.
See our blog post about how to set up New Relic with Play 2.1/Scala on Heroku: http://techblog.nezasa.com/2013/08/performance-monitoring-of-nezasa-with.html
When I attempted to use New Relic on Heroku, I was prompted to enter credit card information after running
$ heroku addons:add newrelic:standard
but I just exited out, and the logging for my Rails application is working*. Note that depending on your New Relic settings you may only be logging locally (the default for development mode does not report to the cloud, but the data is accessible locally).
Sorry to pollute this thread with Ruby stuff, but you may find something similar with regards to Heroku and New Relic.
*Update: I had the same issue again when deploying another application and realized that to use the Heroku New Relic add-on you have to provide credit card information, but if you instrument your application directly you do not have to give credit card information. You do have to have created an account already, though.
I am trying to connect JMSToolbox to an app that is driven by JMS queues running on OpenLiberty.
I am using Open Liberty version 22, specifically 22.0.0.11-202210101601.
As far as I can tell, the correct documentation to follow is https://github.com/jmstoolbox/jmstoolbox/wiki/2.2-Setup-for-IBM-LibertyProfile
The installed features I have on the Open Liberty server from the documentation are as follows:
restConnector-2.0 (note: restConnector-1.0, as specified in the documentation, does not seem to be available)
appSecurity-2.0
wasJmsClient-2.0
wasJmsServer-1.0
Note I was not able to install restConnector-1.0 from the documentation as I could only find restConnector-2.0.
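For reference, the corresponding featureManager block in my server.xml looks roughly like this (a sketch based on the feature names above, not copied from the JMSToolbox documentation):

<featureManager>
    <feature>restConnector-2.0</feature>
    <feature>appSecurity-2.0</feature>
    <feature>wasJmsClient-2.0</feature>
    <feature>wasJmsServer-1.0</feature>
</featureManager>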
For the extra jars, I was only able to find restConnector.jar
I could not find the other jars specified in the documentation:
com.ibm.ws.ejb.thinclient_x.y.z.jar (from <was_full_home>/runtimes)
com.ibm.ws.orb_x.y.z.jar (from <was_full_home>/runtimes)
com.ibm.ws.sib.client.thin.jms_x.y.z.jar (from <was_full_home>/runtimes) (tested with x.y.z == 8.5.5.0+, 9.0.0.0)
Where do I get these jars from? I'm not sure what "WAS full home" means. Am I supposed to take them from a copy of WAS? Are these jars proprietary?
Thanks,
John
"WAS full" refers to "traditional" WebSphere Application Server. You can download it following this page https://www.ibm.com/cloud/blog/websphere-trial-options-and-downloads
WAS full home is shorthand for the WAS installation directory, typically /IBM/WebSphere/AppServer.
These jars are included in the /runtimes subdirectory after you install the product.
So the typical approach, following the page above, would be:
download InstallationManager
install InstallationManager
install the developer version, either v9 (http://www.ibm.com/software/repositorymanager/com.ibm.websphere.ILAN.v90) or v8.5.5 (https://www.ibm.com/software/repositorymanager/com.ibm.websphere.DEVELOPERSILAN.v85)
copy required jars from the installation directory
... but that would take a while...
Alternatively, if you have Docker, you could do the following, which should be much faster than the whole installation process:
pull the WAS v8.5 or v9 image from here: https://hub.docker.com/r/ibmcom/websphere-traditional
start container: docker run --name was-server -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional
locate the required files (inside the container, e.g. via docker exec -it was-server bash):
$ cd /opt/IBM/WebSphere/AppServer/runtimes/
$ ls -la
total 343540
com.ibm.jaxrs1.1.thinclient_9.0.jar
com.ibm.jaxrs2.0.thinclient_9.0.jar
com.ibm.jaxws.thinclient_9.0.jar
com.ibm.ws.admin.client.forJython21_9.0.jar
com.ibm.ws.admin.client_9.0.jar
com.ibm.ws.ejb.embeddableContainer_9.0.jar
com.ibm.ws.ejb.embeddableContainer_nls_9.0.jar
com.ibm.ws.ejb.portable_9.0.jar
com.ibm.ws.ejb.thinclient_9.0.jar
com.ibm.ws.jpa-2.0.thinclient_9.0.jar
com.ibm.ws.jpa-2.1.thinclient_9.0.jar
com.ibm.ws.messagingClient.jar
com.ibm.ws.orb_9.0.jar
com.ibm.ws.sib.client.thin.jms_9.0.jar
com.ibm.ws.sib.client_ExpeditorDRE_9.0.jar
com.ibm.ws.webservices.thinclient_9.0.jar
com.ibm.xml.thinclient_9.0.jar
endorsed
properties
sibc.jmsra.rar
sibc.nls.zip
copy required files from the container:
docker cp <containerID>:/opt/IBM/WebSphere/AppServer/runtimes/xyz.jar .
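For example, to grab the three jars the JMSToolbox documentation asks for, using the container name was-server from the run command above (the _9.0 suffixes match the listing):

docker cp was-server:/opt/IBM/WebSphere/AppServer/runtimes/com.ibm.ws.ejb.thinclient_9.0.jar .
docker cp was-server:/opt/IBM/WebSphere/AppServer/runtimes/com.ibm.ws.orb_9.0.jar .
docker cp was-server:/opt/IBM/WebSphere/AppServer/runtimes/com.ibm.ws.sib.client.thin.jms_9.0.jar .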
I have a Java application (the standard Spring Boot app from the default tutorial: https://spring.io/guides/gs/spring-boot-for-azure/) that I "successfully" deploy to my Web App (created during deployment) via the Eclipse/Maven plugin goal azure-webapp:deploy.
Once deployed, the files are inside the Web App and I can see them. If start-up is successful I get a running application, but if it is not, I do not know how to troubleshoot: I don't know where to find the error logs, what caused the problem, and consequently how to solve it.
As an example of how to make it fail, add this line:
throw new RuntimeException("Doomed to fail");
I tried enabling logs from the "Diagnostic logs" tab and expected to see them under LogFiles/Applications, but that folder remains empty.
How do I troubleshoot a Java application that fails to start in an Azure Web App?
Edit: an additional example of an exception to troubleshoot:
public static void main(String[] args) {
throw new RuntimeException("start failure #21");
//SpringApplication.run(Application.class, args);
}
It sounds like you followed the Spring Boot tutorial Deploying a Spring Boot app to Azure to build the GitHub project microsoft/gs-spring-boot and deploy it to Azure, but it does not work.
Here are my steps; I also followed the tutorial, but deployed in my own way.
I created a directory SpringBoot on my local machine and ran the commands cd SpringBoot and git clone https://github.com/microsoft/gs-spring-boot.
Then I built it via the commands cd gs-spring-boot/complete and mvnw clean package.
Note: I reviewed the sections of the tutorial under Create a sample Spring Boot web app, which seem to target Linux, but the web.config file in microsoft/gs-spring-boot/complete is intended for an Azure Web App on Windows. However, there are no comments describing whether the deployment target is an Azure Web App for Windows or for Linux.
So I used my existing Web App for Windows to test my deployment. I opened my Kudu console in a browser via the URL https://<webapp name>.scm.azurewebsites.net/DebugConsole and dragged the files complete/web.config and complete/target/gs-spring-boot-0.1.0.jar to site/wwwroot, as in the figure below. Then I started my web app, and it works fine.
Note: Please check the JAVA_HOME environment variable configured on Azure via the command echo %JAVA_HOME%, as in the figure below.
If it is not set, you need to configure a Java runtime in the Application settings tab of the Azure portal.
Or you can configure the web.config file to replace the %JAVA_HOME% reference with an existing Java runtime installed under D:\Program Files\Java on the Azure Web App, as below.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified"/>
    </handlers>
    <!-- <httpPlatform processPath="%JAVA_HOME%\bin\java.exe" -->
    <httpPlatform processPath="D:\Program Files\Java\jre1.8.0_181\bin\java.exe"
                  arguments="-Djava.net.preferIPv4Stack=true -Dserver.port=%HTTP_PLATFORM_PORT% -jar &quot;%HOME%\site\wwwroot\gs-spring-boot-0.1.0.jar&quot;">
    </httpPlatform>
  </system.webServer>
</configuration>
I didn't manage to find logs on a Windows-based machine, but if you enable logs on a Linux-based machine you will see them in the "Diagnostic logs" output. There is a catch, though.
There is a 230-second timeout. It waits the full timeout before producing logs in the log file, after which you can access them via the log file or through "Diagnostic logs". Make sure to enable logging before you start the application. This applies to Linux-based machines; I don't know whether it also applies to Windows-based machines.
Then it waits for an answer trigger. The answer trigger, as it turns out, is the phrase "Application is started in X seconds" in the console output. I increased the timeout to 500 seconds, because although the app starts in 60 seconds on my local machine, it takes 430 seconds on the remote Linux-based machine in Microsoft Azure*.
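If the limit here is the standard App Service container start-time setting (an assumption on my part, not something verified above), it can be raised with an app setting, for example:

az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_CONTAINER_START_TIME_LIMIT=500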
Second, I changed the name of my main class from "GameStart" to "Application", and after that it actually caught the trigger and the application started. Nowhere in the manuals did I find the "no logs until timeout" or "trigger phrase" behaviour mentioned.
PS: For reference, it took me 20 minutes to upload the application and 5 minutes to start it. I was using a Central US server, because Central EU did not work out for me (it took even longer), although I'm in central Europe myself.
-
*Using a test account. On a paid account the numbers might be different or similar.
I'm working on a couple of Kafka connectors, and I don't see any errors in their creation/deployment in the console output; however, I am not getting the result I'm looking for (no results whatsoever, for that matter, desired or otherwise). I made these connectors based on Kafka's example FileStream connectors, so my debugging technique was based on the use of the SLF4J Logger that is used in the example. I've searched the console output for the log messages that I thought would be produced, but to no avail. Am I looking in the wrong place for these messages? Or perhaps is there a better way to go about debugging these connectors?
Example uses of the SLF4J Logger that I referenced for my implementation:
Kafka FileStreamSinkTask
Kafka FileStreamSourceTask
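For context, the logging pattern I copied from those classes looks roughly like this (a trimmed sketch of my sink task with hypothetical names, not the actual connector code):

import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MySinkTask extends SinkTask {
    private static final Logger log = LoggerFactory.getLogger(MySinkTask.class);

    @Override
    public String version() {
        return "0.1.0";
    }

    @Override
    public void start(Map<String, String> props) {
        log.info("Starting MySinkTask with props {}", props);
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        // These are the messages I expected to find in the console output
        log.info("Received {} records", records.size());
    }

    @Override
    public void stop() {
        log.info("Stopping MySinkTask");
    }
}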
I will try to reply to your question in a broad way. A simple way to do Connector development could be as follows:
Structure and build your connector source code by looking at one of the many Kafka Connectors available publicly (you'll find an extensive list available here: https://www.confluent.io/product/connectors/ )
Download the latest Confluent Open Source edition (>= 3.3.0) from https://www.confluent.io/download/
Make your connector package available to Kafka Connect in one of the following ways:
Store all your connector jar files (connector jar plus dependency jars, excluding Connect API jars) in a location in your filesystem and enable plugin isolation by adding this location to the plugin.path property in the Connect worker properties. For instance, if your connector jars are stored in /opt/connectors/my-first-connector, you will set plugin.path=/opt/connectors in your worker's properties (see below).
Store all your connector jar files in a folder under ${CONFLUENT_HOME}/share/java. For example: ${CONFLUENT_HOME}/share/java/kafka-connect-my-first-connector. (Needs to start with kafka-connect- prefix to be picked up by the startup scripts). $CONFLUENT_HOME is where you've installed Confluent Platform.
Optionally, increase your logging by changing the log level for Connect in ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties to DEBUG or even TRACE.
Use Confluent CLI to start all the services, including Kafka Connect. Details here: http://docs.confluent.io/current/connect/quickstart.html
Briefly: confluent start
Note: The Connect worker's properties file currently loaded by the CLI is ${CONFLUENT_HOME}/etc/schema-registry/connect-avro-distributed.properties. That's the file you should edit if you choose to enable classloading isolation but also if you need to change your Connect worker's properties.
Once you have Connect worker running, start your connector by running:
confluent load <connector_name> -d <connector_config.properties>
or
confluent load <connector_name> -d <connector_config.json>
The connector configuration can be either in java properties or JSON format.
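For example, a minimal properties file for a hypothetical source connector could look like this (the name, class, and topic are placeholders):

name=my-first-connector
connector.class=com.example.MyFirstSourceConnector
tasks.max=1
topic=my-first-topic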
Run
confluent log connect to open the Connect worker's log file, or navigate directly to where your logs and data are stored by running
cd "$( confluent current )"
Note: change where your logs and data are stored during a session of the Confluent CLI by setting the environment variable CONFLUENT_CURRENT appropriately. E.g. given that /opt/confluent exists and is where you want to store your data, run:
export CONFLUENT_CURRENT=/opt/confluent
confluent current
Finally, to interactively debug your connector, a possible way is to apply the following before starting Connect with the Confluent CLI:
confluent stop connect
export CONNECT_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
confluent start connect
and then connect with your debugger (for instance, remotely to the Connect worker; default port: 5005). To stop running Connect in debug mode, just run unset CONNECT_DEBUG; unset DEBUG_SUSPEND_FLAG; when you are done.
I hope the above will make your connector development easier and ... more fun!
I love the accepted answer. One thing: the environment variables didn't work for me. I'm using Confluent Community Edition 5.3.1.
Here's what I did that worked.
I installed the Confluent CLI from here:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
I ran Confluent using the command confluent local start.
I got the Connect app details using the command ps -ef | grep connect.
I copied the resulting command to an editor and added this argument (right after java):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Then I stopped Connect using the command confluent local stop connect.
Then I ran the Connect command with the added argument.
Brief intermission ---
VS Code development is led by Erich Gamma, of Gang of Four fame, who was also one of the leaders of the Eclipse project. VS Code is becoming a first-class Java IDE; see https://en.wikipedia.org/wiki/Erich_Gamma
Intermission over ---
Next I launched VS Code and opened the Debezium Oracle connector folder (cloned from here: https://github.com/debezium/debezium-incubator).
Then I chose Debug - Open Configurations
and entered the highlighted debugging configuration,
and then ran the debugger. It will hit your breakpoints!
The Connect command should look something like this:
/Library/Java/JavaVirtualMachines/jdk1.8.0_221.jdk/Contents/Home/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -Xms256M -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/logs -Dlog4j.configuration=file:/Users/myuserid/confluent-5.3.1/bin/../etc/kafka/connect-log4j.properties -cp /Users/myuserid/confluent-5.3.1/share/java/kafka/*:/Users/myuserid/confluent-5.3.1/share/java/confluent-common/*:/Users/myuserid/confluent-5.3.1/share/java/kafka-serde-tools/*:/Users/myuserid/confluent-5.3.1/bin/../share/java/kafka/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/dependant-libs-2.12.8/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* org.apache.kafka.connect.cli.ConnectDistributed /var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/connect.properties
The connector module is executed by the Kafka Connect framework. For debugging, we can use standalone mode: configure the IDE to use the ConnectStandalone main class as the entry point.
Create a debug configuration as follows. Remember to tick "Include dependencies with 'Provided' scope" if it is a Maven project.
The connector properties file needs to specify the connector class name via "connector.class" for debugging.
The worker properties file can be copied from the Kafka folder, e.g. /usr/local/etc/kafka/connect-standalone.properties.
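As a rough sketch, such a run/debug configuration would look like this (paths and file names are examples, not taken from the answer above):

Main class:        org.apache.kafka.connect.cli.ConnectStandalone
Program arguments: /usr/local/etc/kafka/connect-standalone.properties my-connector.properties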
I want to start an Ubuntu instance in Google Cloud. My problem is when I want to run a startup script like:
Metadata.Items item2 = new Metadata.Items();
item2.setKey("startup-script");
item2.setValue("my script....");
The problem is that my startup script never runs. Does anybody have an idea how I can run a startup script on a custom image automatically?
I have preinstalled cloud-init on my image.
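For completeness, here is a trimmed sketch of the fuller context (class names from the com.google.api.services.compute.model package of the Google API Java client; values are placeholders): the metadata item must be attached to the instance resource before the insert call.

import java.util.Collections;
import com.google.api.services.compute.model.Instance;
import com.google.api.services.compute.model.Metadata;

public class StartupScriptExample {
    static Instance withStartupScript(Instance instance) {
        // Placeholder startup script; in my case this is "my script...."
        Metadata.Items item2 = new Metadata.Items();
        item2.setKey("startup-script");
        item2.setValue("#! /bin/bash\necho started > /tmp/startup-script-ran");

        Metadata metadata = new Metadata();
        metadata.setItems(Collections.singletonList(item2));

        // The instance still needs name, machine type, disks, etc. and is then
        // passed to compute.instances().insert(project, zone, instance).execute()
        instance.setMetadata(metadata);
        return instance;
    }
}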
In the Google Developers' Console, go to your Ubuntu instance settings.
Then add a new Custom Metadata key-value pair.
Key: startup-script
Value: your script or a URL of where to find it
I checked that it works with my own custom Ubuntu image. Try something simple at the beginning, like:
echo `date` >> /tmp/startup-script
I have developed a Java EAR that I deploy from my local Eclipse to my local WebSphere 8.5 by using the publish button in Eclipse. When I deploy my EAR from the command line, I get an error when I try to access the web page.
I update my EAR from the command line like this:
${was.dir}/profiles/${was.profile}/bin/wsadmin.sh -lang jython -username ${was.username} -password ${was.password} -c AdminApplication.updateApplicationUsingDefaultMerge('${was.app.name}', '${build.dir}/${ear.name}')
The deployment is successful, but when I access my application through my web browser I get the following message instead of seeing my application:
Error 404: com.ibm.ws.webcontainer.servlet.exception.NoTargetForURIException: No target servlet configured for uri: /wwww/index.html
I have verified that the EAR is OK by updating it through the WebSphere web admin interface without any extra configuration.
What am I doing wrong, or what additional steps do I need to take to successfully update my EAR?
You are using the wrong command. You should use something like this:
AdminApp.update('MyAppEAR', 'app', '[ -operation update -contents MyApp.ear -nopreCompileJSPs -installed.ear.destination C:\WAS\MyAppEAR -nodistributeApp -useMetaDataFromBinary -nodeployejb -createMBeansForResources -noreloadEnabled -nodeployws -validateinstall warn -noprocessEmbeddedConfig -filepermission .*\.dll=755#.*\.so=755#.*\.a=755#.*\.sl=755 -noallowDispatchRemoteInclude -noallowServiceRemoteInclude -asyncRequestDispatchType DISABLED -nouseAutoLink -noenableClientModule -clientMode isolated -novalidateSchema -MapModulesToServers [[ MyApp MyApp.war,WEB-INF/web.xml WebSphere:cell=Node02Cell,node=Node02,server=server1 ]]]')
Also, in WebSphere Application Server you can log every command you issue through Administrative Console.
Steps
log in to the admin console with an administrative user (e.g. wasadmin)
click "System Administration" -> "Console Preferences"
check "Enable command assistance notifications" and "Log command assistance commands"
click the Apply button to save the changes
You can see the commands in the help portlet on the upper-right side of the admin console:
If you checked "Log command assistance commands", you can also see the Jython commands in the log file "<WAS_HOME>\profiles\<PROFILE_NAME>\logs\server1\commandAssistanceJythonCommands.log"
I'm not sure about updateApplicationUsingDefaultMerge vs update (the latter is what I use), but don't forget that you also have to call AdminConfig.save() after you're finished.
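So, as a rough sketch of the overall wsadmin Jython sequence (options trimmed; see the full command earlier in this answer):

wsadmin> AdminApp.update('MyAppEAR', 'app', '[ -operation update -contents MyApp.ear -MapModulesToServers [[ MyApp MyApp.war,WEB-INF/web.xml WebSphere:cell=Node02Cell,node=Node02,server=server1 ]]]')
wsadmin> AdminConfig.save()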