404 Instance Unavailable when running a Task Queue - java

I'm using Java SDK 1.7.5, HRD datastore with the following task queue setup:
<queue>
  <name>surveyAssembly</name>
  <rate>5/s</rate>
  <bucket-size>20</bucket-size>
  <max-concurrent-requests>10</max-concurrent-requests>
</queue>
I'm getting an HTTP 404 when triggering the task. There are no errors in the logs; it just fails silently.
It seems similar to this issue: Tasks queue up, nothing happens on retry (no log), but I've had no luck after purging the queue.
Any ideas on how to diagnose the cause?

I was also getting the same error. After debugging, I found that I had forgotten to deploy the backend version from Eclipse, so you have to confirm that both the backend and the frontend have the same updated code.
Try this code:
//backends.xml
<backends>
  <backend name="mailback">
  </backend>
</backends>
// Queue code
Queue surveyAssemblyQueue = QueueFactory.getQueue("surveyAssembly");
surveyAssemblyQueue.add(withUrl("/taskloop")
        .param("type", type)
        .header("Host", BackendServiceFactory.getBackendService().getBackendAddress("mailback", 0)));
Note: the instance ID should be "0" because I have created only one backend instance.
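For reference, the snippet above assumes the following imports from the App Engine SDK (a minimal sketch; note that withUrl needs a static import):
// Imports for the queue code above
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withUrl;
import com.google.appengine.api.backends.BackendServiceFactory;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;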

Related

Apache Beam Dataflow job fails with "GetWork timed out, retrying"

I am able to run an Apache Beam job successfully using the DirectRunner, with the following arguments:
java -jar my-jar.jar --commonConfigFile=comJobConfig.yml
--configFile=relJobConfig.yml
--jobName=my-job
--stagingLocation=gs://my-bucket/staging/
--gcpTempLocation=gs://my-bucket/tmp/
--tempLocation=gs://my-bucket/tmp/
--runner=DirectRunner
--bucket=my-bucket
--project=my-project
--region=us-west1
--subnetwork=my-subnetwork
--serviceAccount=my-svc-account@my-project.iam.gserviceaccount.com
--usePublicIps=false
--workerMachineType=e2-standard-2
--maxNumWorkers=20 --numWorkers=2
--autoscalingAlgorithm=THROUGHPUT_BASED
However, while trying to run on Google Dataflow (simply changing --runner=DataflowRunner) I receive the following message (GetWork timed out, retrying) in the workers.
I have checked the logs generated by the Dataflow process and found
[2023-01-28 20:49:41,600] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:49:39.386Z: Autoscaling: Raised the number of workers to 2 so that the pipeline can catch up with its backlog and keep up with its input rate.
[2023-01-28 20:50:26,911] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:50:26.779Z: Workers have started successfully.
and I see no indication that the workers have failed. Moreover, I do not see any relevant logs indicating that the process is working (in my case, reading from the appropriate Pub/Sub topic for notifications). Let me know if there is any further documentation on this log, as I have not been able to find any.
Turns out I forgot to include the --enableStreamingEngine flag. This solved my problem.
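If you prefer not to depend on remembering the command-line flag, the same option can be set programmatically. A rough sketch, assuming the Beam Dataflow runner is on the classpath (the option corresponds to --enableStreamingEngine):
// StreamingEngineExample.java
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class StreamingEngineExample {
    public static void main(String[] args) {
        // Parse the usual flags, then force Streaming Engine on in code
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        options.setEnableStreamingEngine(true); // equivalent to --enableStreamingEngine
        Pipeline pipeline = Pipeline.create(options);
        // ... apply the Pub/Sub read and the rest of the pipeline here ...
        pipeline.run();
    }
}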

After deploy Google App Engine returns HTTP response code 403

I've done other deploys and everything was fine, but after finishing the app I'm getting this error, and the page request keeps loading.
Do I need to configure something in "IAM"?
Java 11
Standard Environment
h2 DB
Spring boot
The stack trace from Google Cloud:
java.io.IOException: Server returned HTTP response code: 403 for URL: https://clouddebugger.googleapis.com/v2/controller/debuggees/register
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1919)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1515)
    at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:250)
    at com.google.devtools.cdbg.debuglets.java.GcpHubClient.registerDebuggee(Unknown Source)
I've got new data using Stackdriver Debug:
"message": "Stackdriver Debugger API has not been used in project
929024293238 before or it is disabled. Enable it by visiting
https://console.developers.google.com/apis/api/clouddebugger.googleapis.com/overview?project=929024293238
then retry. If you enabled this API recently, wait a few minutes for
the action to propagate to our systems and retry.",
Just a note if somebody else stumbles across this error: I had the same message in Google's App Engine dashboard popping up as many as 60 times in a couple of minutes.
Enabling the Stackdriver Debugging API, as linked above, solved it. No more error logs (of that kind) were being produced. The weird thing is that the Stackdriver Debugging API should have been turned on by default for my standard environment.
The log wasn't very informative.
I changed the application port from 8000 to 8080.
application.properties:
server.port=${PORT:8080}
Now the app is running fine.
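If the log isn't informative, one way to double-check the port is to print what the embedded server actually bound to at startup. A small sketch, assuming Spring Boot 2.x (the PortLogger class name is just an example):
// PortLogger.java
import org.springframework.boot.web.context.WebServerInitializedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// Logs the real listening port so you can confirm it matches the 8080
// that App Engine standard routes requests to.
@Component
public class PortLogger implements ApplicationListener<WebServerInitializedEvent> {
    @Override
    public void onApplicationEvent(WebServerInitializedEvent event) {
        System.out.println("Embedded server listening on port " + event.getWebServer().getPort());
    }
}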

Jmeter I keep getting: 'java.net.ConnectException: Connection timed out: connect'?

I have created a load test which tests a specific URL with 200 users.
When running the load test for one iteration, I keep getting connection timeouts.
I have made the following changes listed here: https://msdn.microsoft.com/en-us/library/aa560610(v=bts.20).aspx
But the issue is still there:
You most probably don't have access to the target host from where you test.
Did you configure a proxy, as your web browser probably has one configured? See:
http://jmeter.apache.org/usermanual/get-started.html#proxy_server
But if the failure is only partial, then your server might be overloaded and rejecting some requests.
My expectation is that "problematic" requests are simply not able to finish in 20 seconds (most probably you have modified Connect timeout and set this value in HTTP Request or HTTP Request Defaults)
20 seconds looks like a long response time to me, so your finding indicates a performance problem in the application under test.
Going forward, if you would like to see a more human-readable message in the results file, switch to a Duration Assertion instead of setting timeouts at the protocol level.
See How to Use JMeter Assertions in Three Easy Steps article for more information on conditionally failing JMeter requests.
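For illustration, a Duration Assertion in the saved test plan (.jmx) looks roughly like this (a sketch; the 20000 ms limit mirrors the 20-second timeout discussed above and should be adjusted to your own target):
<DurationAssertion guiclass="DurationAssertionGui" testclass="DurationAssertion" testname="Duration Assertion" enabled="true">
  <!-- Mark the sampler as failed if the response takes longer than 20000 ms -->
  <stringProp name="DurationAssertion.duration">20000</stringProp>
</DurationAssertion>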
Please check the configuration of the client machine you are running your tests from; it may be that your client system is not able to handle 200 threads. Run the test iteratively: try with 10, 50, 70 users and so on, and check from which iteration onwards you start getting the error. It is also advisable not to include listeners during load testing.
Please check the best practices for load testing with JMeter:
http://jmeter.apache.org/usermanual/best-practices.html

How to keep track of pending builds in Jenkins?

I need to find what build IDs were assigned to a pending build in Jenkins.
I saw that the Build History keeps track of them and displays them to the user, but I didn't find them exposed in API form (XML in my case) at this link:
http://localhost:8080/job/JobName/api/xml
How can I get this information from Jenkins?
Use the queue API:
JENKINS_URL/queue/api/xml
You can get a detailed description of the Jenkins job API by appending /api to the end of the job URL.
Use JENKINS_URL/job/<your-job>/api. At the end of the provided help page, you can find this info:
if you start a job programmatically by an API request...
... successful queueing will result in 201 status code with Location HTTP header pointing the URL of the item in the queue. By polling the api/xml sub-URL of the queue item, you can track the status of the queued task.
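Putting the two answers together, here is a rough Java sketch of triggering a build and then polling the queue item it points to (no authentication or CSRF crumb handling shown; localhost:8080 and JobName are placeholders for your own setup):
// QueuePollExample.java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class QueuePollExample {
    public static void main(String[] args) throws Exception {
        // Trigger the build; successful queueing returns 201 with a Location header
        URL buildUrl = new URL("http://localhost:8080/job/JobName/build");
        HttpURLConnection trigger = (HttpURLConnection) buildUrl.openConnection();
        trigger.setRequestMethod("POST");
        System.out.println("Status: " + trigger.getResponseCode());
        String queueItem = trigger.getHeaderField("Location"); // e.g. .../queue/item/123/

        // Poll the queue item; once a build ID is assigned it appears in this XML
        HttpURLConnection poll = (HttpURLConnection) new URL(queueItem + "api/xml").openConnection();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(poll.getInputStream()))) {
            reader.lines().forEach(System.out::println);
        }
    }
}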

I need some help with Sakai 2.7.1 and Tomcat 5.5.33, in regards to SQL issues

Today I managed to recreate the farms with Scalr.net, and apparently, after restarting Tomcat a few times and fixing issues, I get this error once again. The thing is, I was using MySQL with a clean install of the entire server, which includes Java 6.1_24, Tomcat 5.5.33, and Sakai 2.7.1. The issue I keep running into is access denied for the user, despite the fact that the user exists in the MySQL instance and I have given it complete remote access with sakai@'%'. Even this is not working now, though it was working about an hour before this post was made.
... continued from the log above; everything before this point logs just fine
2011-03-31 18:31:14,120 WARN main org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy - Could not retrieve default auto-commit and transaction isolation settings
org.apache.commons.dbcp.SQLNestedException: Error preloading the connection pool
... continued over 400+ lines...
Here is another error related to the access being denied...
2011-03-31 18:31:16,854 WARN main org.hibernate.cfg.SettingsFactory - Could not obtain connection metadata
java.sql.SQLException: Access denied for user 'sakai'@'ec2-50-17-184-70.compute-1.amazonaws.com' (using password: YES)
.... continued....
I now get this error on every startup; this is with a fresh install of Tomcat/Sakai:
SEVERE: Unable to set localhost. This prevents creation of a GUID. Cause was: ec2-72-44-56-167.compute-1.amazonaws.com: ec2-72-44-56-167.compute-1.amazonaws.com
java.net.UnknownHostException: ec2-72-44-56-167.compute-1.amazonaws.com: ec2-72-44-56-167.compute-1.amazonaws.com
(This most recent error (localhost) was simply fixed by restarting the Amazon AWS instance, thankfully.) Although I keep getting the same errors even with a fresh install... almost as if the information is being restored from a cache, or something.
As with the last question you posted on this topic, the error message seems very clear: the user 'sakai'@... does not have permission to log in to the database you have configured it for. I recommend taking a look at the MySQL documentation on administering user accounts to find out whether you've missed a setting somewhere that would allow this account access.
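One way to take Sakai and Tomcat out of the picture is a bare JDBC connection test run from the application host. A sketch, assuming MySQL Connector/J is on the classpath (the host, database name, and password are placeholders):
// SakaiDbCheck.java
import java.sql.Connection;
import java.sql.DriverManager;

public class SakaiDbCheck {
    public static void main(String[] args) throws Exception {
        // If MySQL rejects this, you get the same "Access denied for user 'sakai'@'<host>'"
        // message as in the Tomcat log, which points to a grants problem rather than Sakai.
        String url = "jdbc:mysql://your-mysql-host:3306/sakai"; // placeholder host/schema
        try (Connection conn = DriverManager.getConnection(url, "sakai", "your-password")) {
            System.out.println("Connected as: " + conn.getMetaData().getUserName());
        }
    }
}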
I believe I may have figured out how to fix this problem. It has nothing to do with MySQL or the Apache server itself; it has to do with Scalr.net failing to initialize the IP, or something of that sort. After doing some research I found some HostInit issues, such as:
Cannot deliver message 'HostInit' (message_id: af9dcfdb-a09e-4971-bdb7-7871b3f7e21c) via REST to server '50.17.135.98' (server_id: e49cfec9-5bcb-44d1-bbc5-fde32450fc89). Error: 0 Timeout was reached; connect() timed out! (http://50.17.135.98:8013/control)
Cannot deliver message 'BlockDeviceAttached' (message_id: a153d83f-3d96-4d53-920a-ccb80701675a) via REST to server '50.17.135.98' (server_id: e49cfec9-5bcb-44d1-bbc5-fde32450fc89). Error: 0 Timeout was reached; connect() timed out! (http://50.17.135.98:8013/control)
Cannot deliver message 'HostUp' (message_id: 1adde27c-9982-4551-b266-c3c432d1dd44) via REST to server '50.17.135.98' (server_id: e49cfec9-5bcb-44d1-bbc5-fde32450fc89). Error: 0 Timeout was reached; connect() timed out! (http://50.17.135.98:8013/control)
Cannot deliver message 'HostInit' (message_id: f1aa4b14-ef57-4361-ae56-87702d674b11) via REST to server '50.17.135.98' (server_id: e49cfec9-5bcb-44d1-bbc5-fde32450fc89). Error: 0 Timeout was reached; connect() timed out! (http://50.17.135.98:8013/control)
So what I did was take a snapshot image of the Apache/MySQL server and terminate it, allowing the instance to be recreated, and this managed to solve the problem.
