I want to group exceptions with Sentry. The exceptions come from different servers, but I want all exceptions of the same type grouped together, for example, all NPEs. I know you can extend EventBuilderHelper, and this is how Sentry groups things, but sentry-java doesn't provide a feature to send an event with a fingerprint built from the method, error type, etc., the way other SDKs do, like this example from docs.sentry.io:
function makeRequest(method, path, options) {
  return fetch(method, path, options).catch(err => {
    Sentry.withScope(scope => {
      // group errors together based on their request and response
      scope.setFingerprint([method, path, err.statusCode]);
      Sentry.captureException(err);
    });
  });
}
This is what I am trying to do, but in this scope I don't have access to the method, the error, etc.:
package com.test;

import io.sentry.SentryClient;
import io.sentry.event.EventBuilder;
import io.sentry.event.helper.ContextBuilderHelper;

public class FingerprintEventBuilderHelper extends ContextBuilderHelper {

    private static final String EXCEPTION_TYPE = "exception_type";

    public FingerprintEventBuilderHelper(SentryClient sentryClient) {
        super(sentryClient);
    }

    @Override
    public void helpBuildingEvent(EventBuilder eventBuilder) {
        super.helpBuildingEvent(eventBuilder);
        // Get the exception type (but how do I obtain it here?)
        String exceptionType = null; // ???
        if (exceptionType != null) {
            eventBuilder.withTag(EXCEPTION_TYPE, exceptionType);
        }
        // Get the method information and params (same problem)
        String paramX = null; // ???
        if (paramX != null) {
            eventBuilder.withTag("PARAM", paramX);
        }
    }
}
The JSON sent to the server has some information about the exception, but I don't know how to get at it:
...
"release": null,
"dist": null,
"platform": "java",
"culprit": "com.sun.ejb.containers.BaseContainer in checkExceptionClientTx",
"message": "Task execution failed",
"datetime": "2019-06-26T14:13:29.000000Z",
"time_spent": null,
"tags": [
["logger", "com.test.TestService"],
["server_name", "localhost"],
["level", "error"]
],
"errors": [],
"extra": {
"Sentry-Threadname": "MainThread",
"rid": "5ff37e943-f4b4-4hc9-870b-4f8c4d18cf84"
},
"fingerprint": ["{{ default }}"],
"key_id": 3,
"metadata": {
"type": "NullPointerException",
"value": ""
},
...
You can get the type of exception that was raised, but I have my doubts about getting the parameters of the methods in the trace:
EventBuilderHelper myEventBuilderHelper = new EventBuilderHelper() {
    @Override
    public void helpBuildingEvent(EventBuilder eventBuilder) {
        eventBuilder.withMessage("Overwritten by myEventBuilderHelper!");
        Map<String, SentryInterface> ifs = eventBuilder.getEvent().getSentryInterfaces();
        if (ifs.containsKey("sentry.interfaces.Exception")) {
            ExceptionInterface exI = (ExceptionInterface) ifs.get("sentry.interfaces.Exception");
            for (SentryException ex : exI.getExceptions()) {
                String exceptionType = ex.getExceptionClassName();
            }
        }
    }
};
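Building on that, here is a minimal sketch of a builder helper that fingerprints events by the outermost exception class, so that, for example, all NullPointerExceptions group together. It assumes your sentry-java 1.x version's EventBuilder exposes withFingerprint(...); the class name TypeFingerprintHelper is just a placeholder.

import java.util.Deque;
import java.util.Map;

import io.sentry.SentryClient;
import io.sentry.event.EventBuilder;
import io.sentry.event.helper.ContextBuilderHelper;
import io.sentry.event.interfaces.ExceptionInterface;
import io.sentry.event.interfaces.SentryException;
import io.sentry.event.interfaces.SentryInterface;

public class TypeFingerprintHelper extends ContextBuilderHelper {

    public TypeFingerprintHelper(SentryClient sentryClient) {
        super(sentryClient);
    }

    @Override
    public void helpBuildingEvent(EventBuilder eventBuilder) {
        super.helpBuildingEvent(eventBuilder);
        Map<String, SentryInterface> ifs = eventBuilder.getEvent().getSentryInterfaces();
        SentryInterface raw = ifs.get("sentry.interfaces.Exception");
        if (raw instanceof ExceptionInterface) {
            Deque<SentryException> exceptions = ((ExceptionInterface) raw).getExceptions();
            if (!exceptions.isEmpty()) {
                // group solely by the outermost exception class
                // (assumes this EventBuilder version has withFingerprint)
                eventBuilder.withFingerprint(exceptions.getFirst().getExceptionClassName());
            }
        }
    }
}

You would register it once, e.g. via sentryClient.addBuilderHelper(new TypeFingerprintHelper(sentryClient)).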
If you look at the sendException method of the client, it initializes the ExceptionInterface with the actual exception:
public void sendException(Throwable throwable) {
    EventBuilder eventBuilder = (new EventBuilder())
            .withMessage(throwable.getMessage())
            .withLevel(Level.ERROR)
            .withSentryInterface(new ExceptionInterface(throwable));
    this.sendEvent(eventBuilder);
}
And the constructors look like this:
public ExceptionInterface(Throwable throwable) {
this(SentryException.extractExceptionQueue(throwable));
}
public ExceptionInterface(Deque<SentryException> exceptions) {
this.exceptions = exceptions;
}
So each exception gets converted to a SentryException, but the original exception is not stored. So if you also need the parameters, you will have to throw a custom exception carrying them and also override the sendException method, which is not a straightforward way.
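To illustrate that last point, here is a rough sketch of the custom-exception idea; ParameterizedException and ParameterizedExceptionReporter are hypothetical names, and this sidesteps sendException by building the event directly:

import java.util.Map;

import io.sentry.SentryClient;
import io.sentry.event.Event;
import io.sentry.event.EventBuilder;
import io.sentry.event.interfaces.ExceptionInterface;

// Hypothetical exception type that carries the parameters you want to report.
public class ParameterizedException extends RuntimeException {

    private final Map<String, String> params;

    public ParameterizedException(String message, Map<String, String> params, Throwable cause) {
        super(message, cause);
        this.params = params;
    }

    public Map<String, String> getParams() {
        return params;
    }
}

// Hypothetical reporter that builds the event itself instead of calling sendException.
class ParameterizedExceptionReporter {

    static void report(SentryClient client, ParameterizedException e) {
        EventBuilder builder = new EventBuilder()
                .withMessage(e.getMessage())
                .withLevel(Event.Level.ERROR)
                .withSentryInterface(new ExceptionInterface(e));
        for (Map.Entry<String, String> param : e.getParams().entrySet()) {
            builder.withTag(param.getKey(), param.getValue()); // the params end up as tags
        }
        client.sendEvent(builder);
    }
}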
I want to get the status code of a command executed in a container with the Fabric8 Java Kubernetes client.
Here is the script located in my container:
echo Bye Bye
exit 1
When I run the script with the CLI or the Node.js client, I am able to get the exit status code.
Here is an example taken from the fabric8 repository:
package org.package;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.ExecListener;
import io.fabric8.kubernetes.client.dsl.ExecWatch;
import okhttp3.Response;
public class OtherMain {
public static void main(String[] args) throws InterruptedException {
String podName = "my-pod";
String namespace = "my-namespace";
try (
KubernetesClient client = new DefaultKubernetesClient();
ExecWatch watch = newExecWatch(client, namespace, podName)) {
Thread.sleep(10 * 1000L);
}
}
private static ExecWatch newExecWatch(KubernetesClient client, String namespace, String podName) {
return client.pods()
.inNamespace(namespace)
.withName(podName)
.readingInput(System.in)
.writingOutput(System.out)
.writingError(System.err)
.withTTY()
.usingListener(new SimpleListener())
.exec("sh", "test.sh");
}
private static class SimpleListener implements ExecListener {
@Override
public void onOpen(Response response) {
System.out.println("The shell will remain open for 10 seconds.");
}
@Override
public void onFailure(Throwable t, Response response) {
System.err.println("shell barfed");
}
@Override
public void onClose(int code, String reason) {
System.out.println("The shell will now close.");
}
}
}
However, when looking at the output, it seems that everything went OK. Is there a way to get the exit status code?
It is possible: you can use the .writingErrorChannel(...) method and parse the response:
{
"metadata": {},
"status": "Failure",
"message": "command terminated with non-zero exit code: exit status 1",
"reason": "NonZeroExitCode",
"details": {
"causes": [
{
"reason": "ExitCode",
"message": "1"
}
]
}
}
Type is: io.fabric8.kubernetes.api.model.Status
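A minimal sketch of that approach, assuming the exec DSL exposes writingErrorChannel(...) and that Serialization.unmarshal(...) is available in this generation of the client (names other than the ones from the question are placeholders):

import java.io.ByteArrayOutputStream;

import io.fabric8.kubernetes.api.model.Status;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.ExecWatch;
import io.fabric8.kubernetes.client.utils.Serialization;

public class ExecStatusExample {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream errorChannel = new ByteArrayOutputStream();
        try (KubernetesClient client = new DefaultKubernetesClient();
             ExecWatch watch = client.pods()
                     .inNamespace("my-namespace")
                     .withName("my-pod")
                     .writingOutput(System.out)
                     .writingError(System.err)
                     .writingErrorChannel(errorChannel) // receives the termination Status JSON
                     .exec("sh", "test.sh")) {
            Thread.sleep(10 * 1000L);
        }
        // The error channel carries a v1 Status document like the one shown above
        Status status = Serialization.unmarshal(errorChannel.toString(), Status.class);
        System.out.println(status.getStatus() + ": " + status.getMessage());
    }
}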
Since the 6.x versions of the Fabric8 Kubernetes Client, a new exitCode() method has been added to the ExecWatch interface, returning a CompletableFuture<Integer>, so now you should be able to get the exit code like this:
ExecWatch execWatch = client.pods().inNamespace("default").withName("example-pod")
.writingOutput(System.out)
.writingError(System.out)
.exec("/bin/ping", "goo");
// ...
int exitCode = execWatch.exitCode().get();
I have a test Elasticsearch 6.0 index populated with millions of records, likely to be in the billions in production. I need to search for a subset of these records, then save this subset of the original set into a secondary index for later searching. I have proven this out by querying ES in Kibana; the challenge is to find the appropriate APIs in Java 8 using my Jest client (searchbox.io, version 5.3.3) to do the same. The Elasticsearch cluster is on AWS, so using a transport client is out.
POST _reindex?slices=10&wait_for_completion=false
{ "conflicts": "proceed",
"source":{
"index": "my_source_idx",
"size": 5000,
"query": { "bool": {
"filter": { "bool" : { "must" : [
{ "nested": { "path": "test", "query": { "bool": { "must":[
{ "terms" : { "test.RowKey": ["abc"]} },
{ "range" : { "test.dates" : { "lte": "2018-01-01", "gte": "2010-08-01"} } },
{ "range" : { "test.DatesCount" : { "gte": 2} } },
{ "script" : { "script" : { "id": "my_painless_script",
"params" : {"min_occurs" : 1, "dateField": "test.dates", "RowKey": ["abc"], "fromDate": "2010-08-01", "toDate": "2018-01-01"}}}}
]}}}}
]}}
}}
},
"dest": {
"index": "my_dest_idx"
},
"script": {
"source": <My painless script>
} }
I am aware I can perform a search on the source index, then create and bulk-load the response records onto the new index, but I want to be able to do this all in one shot, as I have a Painless script to glean some information that is pertinent to the queries that will search the secondary index. Performance is a concern, as the application will be chaining subsequent queries together, using the destination index to query against. Does anyone know how to accomplish this using Jest?
It appears that this particular functionality is not yet supported in Jest. The Jest API has a way to pass in a script (not a query) as a parameter, but I was having problems even with that.
EDIT:
After some hacking with a coworker, we found a way around this...
Step 1) Extend the GenericResultAbstractAction class, with edits to handle the script:
public class GenericResultReindexActionHack extends GenericResultAbstractAction {
GenericResultReindexActionHack(GenericResultReindexActionHack.Builder builder) {
super(builder);
Map<String, Object> payload = new HashMap<>();
payload.put("source", builder.source);
payload.put("dest", builder.dest);
if (builder.conflicts != null) {
payload.put("conflicts", builder.conflicts);
}
if (builder.size != null) {
payload.put("size", builder.size);
}
if (builder.script != null) {
Script script = (Script) builder.script;
// Note the script parameter needs to be formatted differently to conform to the ES _reindex API:
payload.put("script", new Gson().toJson(ImmutableMap.of("id", script.getIdOrCode(), "params", script.getParams())));
}
this.payload = ImmutableMap.copyOf(payload);
setURI(buildURI());
}
@Override
protected String buildURI() {
return super.buildURI() + "/_reindex";
}
@Override
public String getRestMethodName() {
return "POST";
}
@Override
public String getData(Gson gson) {
if (payload == null) {
return null;
} else if (payload instanceof String) {
return (String) payload;
} else {
// We need to remove the incorrect formatting for the query, dest, and script fields:
// TODO: Need to consider spaces in the JSON
return gson.toJson(payload).replaceAll("\\\\n", "")
.replace("\\", "")
.replace("query\":\"", "query\":")
.replace("\"},\"dest\"", "},\"dest\"")
.replaceAll("\"script\":\"","\"script\":")
.replaceAll("\"}","}")
.replaceAll("},\"script\"","\"},\"script\"");
}
}
public static class Builder extends GenericResultAbstractAction.Builder<GenericResultReindexActionHack , GenericResultReindexActionHack.Builder> {
private Object source;
private Object dest;
private String conflicts;
private Long size;
private Object script;
public Builder(Object source, Object dest) {
this.source = source;
this.dest = dest;
}
public GenericResultReindexActionHack.Builder conflicts(String conflicts) {
this.conflicts = conflicts;
return this;
}
public GenericResultReindexActionHack.Builder size(Long size) {
this.size = size;
return this;
}
public GenericResultReindexActionHack.Builder script(Object script) {
this.script = script;
return this;
}
public GenericResultReindexActionHack.Builder waitForCompletion(boolean waitForCompletion) {
return setParameter("wait_for_completion", waitForCompletion);
}
public GenericResultReindexActionHack.Builder waitForActiveShards(int waitForActiveShards) {
return setParameter("wait_for_active_shards", waitForActiveShards);
}
public GenericResultReindexActionHack.Builder timeout(long timeout) {
return setParameter("timeout", timeout);
}
public GenericResultReindexActionHack.Builder requestsPerSecond(double requestsPerSecond) {
return setParameter("requests_per_second", requestsPerSecond);
}
public GenericResultReindexActionHack build() {
return new GenericResultReindexActionHack(this);
}
}
}
Step 2) Use of this class with a query then requires you to pass in the query as part of the source, then remove the '\n' characters:
ImmutableMap<String, Object> sourceMap = ImmutableMap.of("index", sourceIndex, "query", qb.toString().replaceAll("\\\\n", ""));
ImmutableMap<String, Object> destMap = ImmutableMap.of("index", destIndex);
GenericResultReindexActionHack reindex = new GenericResultReindexActionHack.Builder(sourceMap, destMap)
.waitForCompletion(false)
.conflicts("proceed")
.size(5000L)
.script(reindexScript)
.setParameter("slices", 10)
.build();
JestResult result = handleResult(reindex);
String task = result.getJsonString();
return (task);
Note the reindexScript parameter is of type org.elasticsearch.script.Script.
This is a messy, hack-y way of getting around the limitations of Jest, but it seems to work. I understand that by doing it this way there may be some limitations to what may be acceptable in the input formatting...
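As a side note: if your cluster endpoint accepts plain HTTPS requests (no AWS request signing), another option outside Jest is the Elasticsearch low-level REST client, which can POST the _reindex body verbatim. A rough sketch, with the endpoint and the request body as placeholders:

import java.util.Collections;

import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class ReindexViaLowLevelClient {
    public static void main(String[] args) throws Exception {
        // "my-es-endpoint" is a placeholder for the AWS ES domain endpoint
        try (RestClient restClient = RestClient.builder(new HttpHost("my-es-endpoint", 443, "https")).build()) {
            // inline the full source/query/script JSON from the question here
            String body = "{\"conflicts\":\"proceed\",\"source\":{\"index\":\"my_source_idx\"},\"dest\":{\"index\":\"my_dest_idx\"}}";
            Response response = restClient.performRequest(
                    "POST",
                    "/_reindex",
                    Collections.singletonMap("wait_for_completion", "false"),
                    new NStringEntity(body, ContentType.APPLICATION_JSON));
            System.out.println(response.getStatusLine());
        }
    }
}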
I'm trying to make my Android app download images from AWS S3. However, the following exception keeps coming up:
com.amazonaws.AmazonServiceException: Request ARN is invalid (Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError; Request ID: 3481bd5f-1db2-11e5-8442-cb6f713243b6)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:710)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:385)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:196)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:875)
at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.assumeRoleWithWebIdentity(AWSSecurityTokenServiceClient.java:496)
at com.amazonaws.auth.CognitoCredentialsProvider.populateCredentialsWithSts(CognitoCredentialsProvider.java:671)
at com.amazonaws.auth.CognitoCredentialsProvider.startSession(CognitoCredentialsProvider.java:555)
at com.amazonaws.auth.CognitoCredentialsProvider.refresh(CognitoCredentialsProvider.java:503)
at com.application.app.utils.helper.S3Utils.getCredProvider(S3Utils.java:35)
at com.application.app.utils.helper.S3Utils.getS3Client(S3Utils.java:45)
at com.application.app.integration.volley.CustomImageRequest.parseNetworkError(CustomImageRequest.java:73)
at com.android.volley.NetworkDispatcher.parseAndDeliverNetworkError(NetworkDispatcher.java:144)
at com.android.volley.NetworkDispatcher.run(NetworkDispatcher.java:135)
I have a bucket and an identity pool, and I have also created the required roles.
My Cognito_APPUnauth_Role has the following INLINE POLICY:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1435504517000",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
}
]
}
I have a Java class named S3Utils that has some helper methods.
public class S3Utils {
private static AmazonS3Client sS3Client;
private static CognitoCachingCredentialsProvider sCredProvider;
public static CognitoCachingCredentialsProvider getCredProvider(Context context){
if (sCredProvider == null) {
sCredProvider = new CognitoCachingCredentialsProvider(
context,
Definitions.AWS_ACCOUNT_ID,
Definitions.COGNITO_POOL_ID,
Definitions.COGNITO_ROLE_UNAUTH,
null,
Regions.US_EAST_1
);
}
sCredProvider.refresh();
return sCredProvider;
}
public static String getPrefix(Context context) {
return getCredProvider(context).getIdentityId() + "/";
}
public static AmazonS3Client getS3Client(Context context) {
if (sS3Client == null) {
sS3Client = new AmazonS3Client(getCredProvider(context));
}
return sS3Client;
}
public static String getFileName(String path) {
return path.substring(path.lastIndexOf("/") + 1);
}
public static boolean doesBucketExist() {
return sS3Client.doesBucketExist(Definitions.BUCKET_NAME.toLowerCase(Locale.US));
}
public static void createBucket() {
sS3Client.createBucket(Definitions.BUCKET_NAME.toLowerCase(Locale.US));
}
public static void deleteBucket() {
String name = Definitions.BUCKET_NAME.toLowerCase(Locale.US);
List<S3ObjectSummary> objData = sS3Client.listObjects(name).getObjectSummaries();
if (objData.size() > 0) {
DeleteObjectsRequest emptyBucket = new DeleteObjectsRequest(name);
List<DeleteObjectsRequest.KeyVersion> keyList = new ArrayList<DeleteObjectsRequest.KeyVersion>();
for (S3ObjectSummary summary : objData) {
keyList.add(new DeleteObjectsRequest.KeyVersion(summary.getKey()));
}
emptyBucket.withKeys(keyList);
sS3Client.deleteObjects(emptyBucket);
}
sS3Client.deleteBucket(name);
}
}
Part of the method where the exception occurs, in CustomImageRequest.java:
s3Client = S3Utils.getS3Client(context);
ObjectListing objects = s3Client.listObjects(new ListObjectsRequest().withBucketName(Definitions.BUCKET_NAME).withPrefix(this.urlToRetrieve));
List<S3ObjectSummary> objectSummaries = objects.getObjectSummaries();
//This isn't just an id, it is a full picture name in S3 bucket.
for (S3ObjectSummary summary : objectSummaries)
{
String key = summary.getKey();
if (!key.equals(this.urlToRetrieve)) continue;
S3ObjectInputStream content = s3Client.getObject(Definitions.BUCKET_NAME, key).getObjectContent();
try {
this.s3Image = IOUtils.toByteArray(content);
} catch (IOException e) {
}
return new Object();
}
What am I doing wrong that causes this exception to be thrown every time? Thanks in advance.
I'm guessing there might be an error in the role ARN you specified. A role ARN should look something like
arn:aws:cognito-identity:us-east-1:ACCOUNTNUMBER:identitypool/us-east-1:UUID
If it is misspelled or part of it is left off, you may get this error. You may also want to consider using the new CognitoCachingCredentialsProvider constructor:
sCredProvider = new CognitoCachingCredentialsProvider(
context,
Definitions.COGNITO_POOL_ID,
Regions.US_EAST_1
);
Note, however, that you will have to make sure you have specified your role ARN in the Cognito console, but this should help prevent the issue in the future.
Edited for clarity and formatting, and added that you need to specify your ARN in the console if using the new constructor.
With Rhino 1.7R4, we can create properties in JavaScript using the Object.defineProperty() method.
public class MyGlobalObject : org.mozilla.javascript.ScriptableObject
{
public static org.mozilla.javascript.Script ___compiledScript = null;
public MyGlobalObject()
{
org.mozilla.javascript.Context con = org.mozilla.javascript.Context.enter();
try
{
con.initStandardObjects(this);
string strScript = "Object.defineProperty(this,\r\n 'onload', \r\n{ set : function(val){this.set_onload(val);},\r\n get : function(){return this.get_onload();}, enumerable: true, configurable: true});";
this.defineFunctionProperties(new string[] { "set_onload", "get_onload" }, typeof(MyGlobalObject), org.mozilla.javascript.ScriptableObject.DONTENUM);
org.mozilla.javascript.Script sc = con.compileString(strScript, "", 1, null);
object result_onload = con.evaluateString(this, "this.onload == undefined;", "", 1, null); // make sure it is not defined.
Console.WriteLine("onload is undefined? : {0}", result_onload);
// Define Properties Now.
sc.exec(con, this);
con.evaluateString(this, "this.onload= function(){var t1 = 1;};", "", 1, null);
object onloadobjectXYZ = con.evaluateString(this, "this.onload;", "", 1, null); // get function now.
Console.WriteLine("Onload object : {0} is found", onloadobjectXYZ);
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
org.mozilla.javascript.Context.exit();
}
private object __onloadFunction;
public object get_onload()
{
Console.WriteLine("get_onload() called!");
return this.__onloadFunction;
}
//[org.mozilla.javascript.annotations.JSSetter]
public void set_onload(object _val)
{
Console.WriteLine("set_onload() called!");
this.__onloadFunction = _val;
}
public override string getClassName()
{
return "Global";
}
}
How can I create a FunctionObject identical to "onloadobjectXYZ" using pure Rhino object operations (not by evaluating a script like 'strScript')? It seems it may be possible to create a FunctionObject for the setter and getter, but I could not find a good example. Does anyone know how to define such properties?
Thank you in advance!
defineProperty with Java Method setter/getter is slightly different from Object.defineProperty():
this.defineProperty("onload", null, javaonloadGetMethod, javaonloadSetMethod, ScriptableObject.PERMANENT);
This works for me as a workaround.
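For completeness, a sketch (plain Java) of where those getter/setter Method objects can come from; the method names are the get_onload/set_onload pair already defined on MyGlobalObject above, and exception handling for getMethod is omitted:

// Resolve the existing accessor methods by reflection and register the property
// via ScriptableObject.defineProperty(name, delegateTo, getter, setter, attributes).
java.lang.reflect.Method getter = MyGlobalObject.class.getMethod("get_onload");
java.lang.reflect.Method setter = MyGlobalObject.class.getMethod("set_onload", Object.class);
this.defineProperty("onload", null, getter, setter, org.mozilla.javascript.ScriptableObject.PERMANENT);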
I have the following set of routes implemented (for example):
GET /api/:version/:entity my.controllers.~~~~~
GET /api/:version/:entity/:id my.controllers.~~~~~
POST /api/:version/:entity my.controllers.~~~~~
POST /api/:version/:entity/:id my.controllers.~~~~~
DELETE /api/:version/:entity my.controllers.~~~~~
POST /api/:version/search/:entity my.controllers.~~~~~
And they work beautifully. Now let's say I want to implement a "batch endpoint" for the same API. It should look something like this:
POST /api/:version/batch my.controllers.~~~~~
and the body should look like this:
[
{
"method": "POST",
"call": "/api/1/customer",
"body": {
"name": "antonio",
"email": "tonysmallhands#gmail.com"
}
},
{
"method": "POST",
"call": "/api/1/customer/2",
"body": {
"name": "mario"
}
},
{
"method": "GET",
"call": "/api/1/company"
},
{
"method": "DELETE",
"call": "/api/1/company/22"
}
]
To do that, I would like to know how I can call the Play Framework router to dispatch those requests. I was planning to use something similar to what is advised for unit tests:
@Test
public void badRoute() {
Result result = play.test.Helpers.routeAndCall(fakeRequest(GET, "/xx/Kiki"));
assertThat(result).isNull();
}
By going into the source code of routeAndCall(), you find something like this:
/**
* Use the Router to determine the Action to call for this request and executes it.
* @deprecated
* @see #route instead
*/
@SuppressWarnings(value = "unchecked")
public static Result routeAndCall(FakeRequest fakeRequest) {
try {
return routeAndCall((Class<? extends play.core.Router.Routes>)FakeRequest.class.getClassLoader().loadClass("Routes"), fakeRequest);
} catch(RuntimeException e) {
throw e;
} catch(Throwable t) {
throw new RuntimeException(t);
}
}
/**
* Use the Router to determine the Action to call for this request and executes it.
* @deprecated
* @see #route instead
*/
public static Result routeAndCall(Class<? extends play.core.Router.Routes> router, FakeRequest fakeRequest) {
try {
play.core.Router.Routes routes = (play.core.Router.Routes)router.getClassLoader().loadClass(router.getName() + "$").getDeclaredField("MODULE$").get(null);
if(routes.routes().isDefinedAt(fakeRequest.getWrappedRequest())) {
return invokeHandler(routes.routes().apply(fakeRequest.getWrappedRequest()), fakeRequest);
} else {
return null;
}
} catch(RuntimeException e) {
throw e;
} catch(Throwable t) {
throw new RuntimeException(t);
}
}
So my question is: is there a less "hacky" way to do this with Play (I am not against mixing Scala and Java to get there) than copying the above code? I would also like to give the option of executing the batched calls in parallel or in sequence... I guess instantiating only one Routes instance using the class loader would be problematic then?
You can use the following method call to route your fake requests:
Play.current.global.onRouteRequest. Please see this post for a full example: http://yefremov.net/blog/play-batch-api/
You can probably use the WS API for that, but personally I'd just create private methods for collecting the data and use them from both the 'single' and the 'batch' actions; it will be faster for sure.
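For what it's worth, a rough sketch of that shared-private-methods idea with the Play 2 Java API; BatchController, dispatch(...) and the per-entity helpers it would call are hypothetical names:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.JsonNodeFactory;

import play.mvc.Controller;
import play.mvc.Result;

public class BatchController extends Controller {

    public static Result batch() {
        JsonNode calls = request().body().asJson();
        ArrayNode results = JsonNodeFactory.instance.arrayNode();
        for (JsonNode call : calls) {
            String method = call.get("method").asText();
            String path = call.get("call").asText();
            JsonNode body = call.path("body");
            // dispatch() is a hypothetical private helper that parses the path into
            // (version, entity, id) and calls the same private methods the single actions use
            results.add(dispatch(method, path, body));
        }
        return ok(results);
    }

    private static JsonNode dispatch(String method, String path, JsonNode body) {
        // left to the controller's shared helpers; the calls could also be run in parallel
        // (e.g. with F.Promise) if sequential execution turns out to be too slow
        throw new UnsupportedOperationException("wire this to your existing private methods");
    }
}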