Log4J SMTP digest/aggregate emails?

I have a JBoss batch application that sometimes sends hundreds of emails in a minute, all with Log4J errors, to the same email address. This causes problems with Gmail, because it says we are sending emails too quickly for that Gmail account.
So I was wondering if there is a way to create a "digest" or "aggregate" email that puts all the error logs into one email and sends it every 5 minutes. That way we may get a large email every 5 minutes, but at least we actually get it instead of it being delayed for hours and hours by Gmail servers rejecting it.
I read a post that suggested using an evaluator for this, but I couldn't see how that is configured in the Log4J XML configuration file. It also seemed like it might not be able to "digest" all the logs into one email anyway.
Has anyone done this before? Or know if it's possible?

From (the archived) SMTPAppender Usage page:
Set this property:
log4j.appender.myMail.evaluatorClass = com.mydomain.example.MyEvaluator
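Since the question asks about the Log4J XML configuration file, the equivalent log4j 1.x XML setup should look roughly like this (a sketch: the param names map onto SMTPAppender's setters, and the host, addresses and buffer size are placeholder values to adapt):

<appender name="myMail" class="org.apache.log4j.net.SMTPAppender">
  <param name="SMTPHost" value="smtp.example.com"/>
  <param name="From" value="batch@example.com"/>
  <param name="To" value="alerts@example.com"/>
  <param name="Subject" value="Batch error digest"/>
  <!-- number of events kept in the cyclic buffer; they are flushed into one email on trigger -->
  <param name="BufferSize" value="512"/>
  <param name="EvaluatorClass" value="com.mydomain.example.MyEvaluator"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
  </layout>
</appender>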
Now you have to create the evaluator class and implement the org.apache.log4j.spi.TriggeringEventEvaluator interface and place this class in a path where log4j can access it.
// Example TriggeringEventEvaluator impl
package com.mydomain.example;

import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.spi.TriggeringEventEvaluator;

public class MyEvaluator implements TriggeringEventEvaluator {
    public boolean isTriggeringEvent(LoggingEvent event) {
        return true;
    }
}
You have to write the evaluator logic within this method.
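To get the 5-minute digest behaviour the question asks for, the evaluator can report a trigger at most once per interval, so that SMTPAppender flushes everything buffered since the last email into a single message. A minimal sketch, assuming BufferSize is set large enough to hold 5 minutes' worth of errors (older events silently fall out of the cyclic buffer):

package com.mydomain.example;

import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.spi.TriggeringEventEvaluator;

// Triggers at most once every 5 minutes; all events buffered in between
// go out together in one digest email.
public class DigestEvaluator implements TriggeringEventEvaluator {

    private static final long INTERVAL_MS = 5 * 60 * 1000L;
    private long lastTrigger = 0L;

    public synchronized boolean isTriggeringEvent(LoggingEvent event) {
        long now = System.currentTimeMillis();
        if (now - lastTrigger >= INTERVAL_MS) {
            lastTrigger = now;
            return true;
        }
        return false;
    }
}

Point log4j.appender.myMail.evaluatorClass (or the EvaluatorClass param in XML) at this class instead of MyEvaluator. One trade-off to be aware of: evaluation only happens when an event arrives, so errors buffered at the end of a quiet period wait until the next error comes in before they are sent.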

I created a freely usable solution for log4j2 with an ExtendedSmtpAppender.
(If you still use log4j 1.x, simply replace your log4j-1.x.jar with log4j-1.2-api-2.x.jar - and log4j-core-2.x.jar + log4j-api-2.x.jar of course.)
You get it from Maven Central as de.it-tw:log4j2-extras (This requires Java 7+ and log4j 2.8+).
If you are restricted to Java 6 (and thus log4j 2.3) then use de.it-tw:log4j2-Java6-extras
Additionally, see the GitLab project: https://gitlab.com/thiesw/log4j2-extras (or https://gitlab.com/thiesw/log4j2-Java6-extras)
(Older answer: if you use log4j2, see my answer to another Stack Overflow question: https://stackoverflow.com/a/34072704/5074004 or go directly to my external but publicly available solution presented in https://issues.apache.org/jira/browse/LOG4J2-1192)


Unknown Move Destination: STORESCP

I have just installed dcm4chee4-4.4.0.Beta1, following the INSTALL.md instructions, and everything works fine except the movescu test.
When I run this test I can see an error in standalone/log/server.log (beforehand I had launched storescp -b11115 in another console). This is the error:
2015-09-13 12:48:49,105 INFO [org.dcm4che3.net.Association] (pool-6-thread-7) DCM4CHEE<-MOVESCU(7): processing 1:C-MOVE-RQ[pcid=1, prior=0
cuid=1.2.840.10008.5.1.4.1.2.2.2 - Study Root Query/Retrieve Information Model - MOVE
tsuid=1.2.840.10008.1.2 - Implicit VR Little Endian failed. Caused by: org.dcm4che3.net.service.DicomServiceException: Unknown Move Destination: STORESCP#localhost:11115
at org.dcm4chee.archive.retrieve.scp.CMoveSCP.calculateMatches(CMoveSCP.java:184) [dcm4chee-arc-retrieve-scp-4.4.0.Beta1.jar:]
I think this is because of configuration; maybe I have to add STORESCP as an acceptedAET or similar, but I can't find info on how to do it. I searched through LDAP using Apache Directory Studio, but I didn't find anything.
Thanks in advance.
Using dcm4che3, it goes like this if you're implementing an SCP and need to define which other SCPs are allowed to C-STORE things to you.
// Usual calamity creating Connection, ApplicationEntity and Device
...
ApplicationEntity ae = new ApplicationEntity("MYAETITLE");
String[] acceptedAETs = { "STORESCP", "GEPACS" }; // etc...
ae.setAcceptedCallingAETitles(acceptedAETs);
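For context, the elided setup at the top usually amounts to wiring a Device, a Connection and the ApplicationEntity together. A minimal sketch against the dcm4che3 API (the device name and port below are placeholders, not values from the question):

import org.dcm4che3.net.ApplicationEntity;
import org.dcm4che3.net.Connection;
import org.dcm4che3.net.Device;

Device device = new Device("example-device");   // placeholder name
Connection conn = new Connection();
conn.setPort(11112);                            // placeholder port
device.addConnection(conn);

ApplicationEntity ae = new ApplicationEntity("MYAETITLE");
ae.setAssociationAcceptor(true);                // this AE accepts incoming associations
device.addApplicationEntity(ae);
ae.addConnection(conn);

ae.setAcceptedCallingAETitles("STORESCP", "GEPACS");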
I assume that your favourite SCP (STORESCP) may need to know where to find the SCP known by MYAETITLE, identified by IP address and port. Typically you connect to an SCP as an SCU, issuing a C-MOVE (in the scenario laid out here) instructing the SCP to do a C-STORE to the AET identified in the C-MOVE.
I'm a bit confused by your choice of AE Title in your question (STORESCP), because it indicates that you mix up the two SCPs involved here: the one receiving the C-MOVE (which should not be called STORESCP :) and the one implementing the C-STORE behaviour. The answer I gave above is aimed at the SCP implementing the C-STORE behaviour.

ADAL 4 Android not passing client secret

I'll first say that I'm sure it is just me, since people have probably got this to work out of the box without having to edit the ADAL 4 Android library source.
When running the sample program and authenticating with a token I get an error from Azure that it is not passing the client_secret in the message body. I can confirm that this is in fact the case: it is not passing the client_secret.
However, if I edit the OAuth2.java file and change the buildTokenRequestMessage method to something like the following, the workflow works perfectly:
public String buildTokenRequestMessage(String code) throws UnsupportedEncodingException {
    String message = String.format("%s=%s&%s=%s&%s=%s&%s=%s&%s=%s",
            AuthenticationConstants.OAuth2.GRANT_TYPE,
            StringExtensions.URLFormEncode(AuthenticationConstants.OAuth2.AUTHORIZATION_CODE),
            AuthenticationConstants.OAuth2.CODE, StringExtensions.URLFormEncode(code),
            AuthenticationConstants.OAuth2.CLIENT_ID,
            StringExtensions.URLFormEncode(mRequest.getClientId()),
            AuthenticationConstants.OAuth2.REDIRECT_URI,
            StringExtensions.URLFormEncode(mRequest.getRedirectUri()),
            // these are the two lines I've added to make it work
            AuthenticationConstants.OAuth2.CLIENT_SECRET,
            StringExtensions.URLFormEncode("<MY CLIENT SECRET>"));
    return message;
}
Am I doing something wrong? If not, what is the correct way to access the client secret?
My implementation is straight from the demo application, with the only changes being the strings set up to match my endpoints.
Thanks
You need to register your app as a Native application in the Azure AD portal. You don't need a client secret for a native app.
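Once the app is registered as Native, the token request goes through without any secret. A minimal sketch of the ADAL call, assuming "this" is your Activity and that the authority, resource, client id and redirect URI placeholders are replaced with the values from your own registration:

import com.microsoft.aad.adal.AuthenticationCallback;
import com.microsoft.aad.adal.AuthenticationContext;
import com.microsoft.aad.adal.AuthenticationResult;
import com.microsoft.aad.adal.PromptBehavior;

AuthenticationContext authContext = new AuthenticationContext(
        this, "https://login.microsoftonline.com/common", true);
authContext.acquireToken(this, "<RESOURCE URI>", "<CLIENT ID>", "<REDIRECT URI>",
        PromptBehavior.Auto,
        new AuthenticationCallback<AuthenticationResult>() {
            @Override
            public void onSuccess(AuthenticationResult result) {
                // Token acquired; no client_secret was ever sent.
            }

            @Override
            public void onError(Exception exc) {
                exc.printStackTrace();
            }
        });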

Spring IMAP mail receiver not reading all emails every time

I have a piece of code which uses Spring Integration's IMAP adapter to poll an inbox and read all incoming emails which are unread, and that works perfectly. But if I open an email message and then mark it as "unread" in my Outlook inbox, the poller doesn't fetch the marked email.
I can use the POP3 adapter, which fetches all the emails but deletes them afterwards, whereas I want to keep the emails in my inbox and have the poller fetch all the emails which are unseen.
Any suggestions to handle this problem? I've been searching and reading articles on email adapters but didn't find anything useful.
Thanks in advance.
Looks like you need a custom 'search-term-strategy'. From the Spring Integration (SI) documentation:
By default, the ImapMailReceiver will search for Messages based on the default SearchTerm which is All mails that are RECENT (if supported), that are NOT ANSWERED, that are NOT DELETED, that are NOT SEEN and have not been processed by this mail receiver (enabled by the use of the custom USER flag or simply NOT FLAGGED if not supported). Since version 2.2, the SearchTerm used by the ImapMailReceiver is fully configurable via the SearchTermStrategy which you can inject via the search-term-strategy attribute. SearchTermStrategy is a simple strategy interface with a single method that allows you to create an instance of the SearchTerm that will be used by the ImapMailReceiver.
And here is a post from the SI forum with Oleg's fantastic explanation: Server does not support RECENT or USER flags
And here you can find SI's DefaultSearchTermStrategy: it's a good place to see how to implement your own strategy. I guess your case is:
This email server does not support RECENT flag, but it does support USER flags which will be used to prevent duplicates during email fetch.
Switch the SI mail logging level to DEBUG and take a look at which flags your email server supports.
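For this case, a strategy that matches purely on the SEEN flag (so messages re-marked as unread are picked up again) could look roughly like the sketch below. One caveat: without the RECENT/USER-flag de-duplication of the default strategy, you may re-fetch messages unless the adapter marks them as read (should-mark-messages-as-read):

import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.search.FlagTerm;
import javax.mail.search.SearchTerm;

import org.springframework.integration.mail.SearchTermStrategy;

public class UnseenSearchTermStrategy implements SearchTermStrategy {

    @Override
    public SearchTerm generateSearchTerm(Flags supportedFlags, Folder folder) {
        // Match every message that does NOT carry the SEEN flag,
        // regardless of RECENT or USER flag support on the server.
        return new FlagTerm(new Flags(Flags.Flag.SEEN), false);
    }
}

Then inject it via the search-term-strategy attribute of the IMAP adapter.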

In java, how can I get an Amazon EC2 Instance to see its own tags?

So I have a Java program running within an Amazon EC2 instance. Is there a way for it to programmatically get its own tags? I have tried instantiating a new AmazonEC2Client to use the describeTags() function, but it only gives me null. Any help would be appreciated, thank you.
Edit: To make things clearer, the instances are going to be unmanned worker machines spun up solely to do some computations.
This should help you get started...
import java.util.Collections;
import java.util.List;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.*;
import com.amazonaws.util.EC2MetadataUtils;

// Look up this instance's id from the metadata service, then ask the EC2 API for its tags.
String instanceId = EC2MetadataUtils.getInstanceId();
AmazonEC2 client = AmazonEC2ClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .build();
DescribeTagsRequest req = new DescribeTagsRequest()
        .withFilters(new Filter("resource-id", Collections.singletonList(instanceId)));
DescribeTagsResult describeTagsResult = client.describeTags(req);
List<TagDescription> tags = describeTagsResult.getTags();
You should be able to get the current instance id by sending a request to http://169.254.169.254/latest/meta-data/instance-id. This only works within EC2. With this you can access quite a bit of information about the instance. However, tags do not appear to be included.
You should be able to take the instance id along with the correct authentication to get the instance tags. If you are going to run this on an instance, you may want to provide an IAM user with limited access instead of a user which has access to everything in case the instance is compromised.
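That metadata request is just a plain HTTP GET, so if you are not using the SDK's EC2MetadataUtils you can issue it yourself. A minimal sketch (the timeout values are illustrative, not required):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class InstanceIdFetcher {
    public static String fetchInstanceId() throws Exception {
        URL url = new URL("http://169.254.169.254/latest/meta-data/instance-id");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(2000); // fail fast when not running on EC2
        conn.setReadTimeout(2000);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return in.readLine(); // the response body is the bare instance id
        }
    }
}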
While using user-data may be the simplest solution, the OP was asking specifically about tagging, and unfortunately Amazon hasn't made this as easy as it could be. However, it can be done. You want to use a combination of two Amazon services.
First you need to retrieve the Instance ID. This can be achieved by hitting the URL from within your instance:
http://169.254.169.254/latest/meta-data/instance-id
Once you have the resource ID, you'll want to use Amazon's EC2 API to access the tags. Since you said you're using Java, I would suggest using the AWS SDK that Amazon makes available. Within this SDK you'll find a method called describeTags (documentation). You can use a resource ID as one of the filters to get the tags specific to your instance. Supported filters are:
tag key
resource-id
resource-type
I suggest doing this retrieval at boot using something like cloud-init and caching the tags on your server for use later if necessary.

Documentum DFS: Timeout for service calls

I'm working with the DFS Java API and was wondering whether anyone knows a simple way to configure a client-side timeout for service calls, one that can be configured on the service context, for example?
I have experienced some rare occasions where a Documentum repository was not responding, that's why I am considering a general timeout for all DFS calls.
For testing a hanging service call, I created a dummy TBO implementation that simply blocks the thread for 10 minutes when updating the document:
@Override
public void saveEx(boolean keepLock, String versionLabels) throws DfException {
    if (!isNew()) {
        try {
            Thread.sleep(1000 * 60 * 10); // block for 10 minutes
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    super.saveEx(keepLock, versionLabels);
}
I'm not sure if this behaves exactly like a hanging service call, but at least in my tests it worked as expected: my invocations of the update method of the Object Service took about 10 minutes.
Is there any configuration I have not yet found, or maybe a runtime-property to pass to the service context to configure the timeout?
I would prefer using existing features of DFS for this instead of implementing my own mechanism.
Have you tried editing the value in dfs-runtime.properties? I don't think the timeout can be context-specific, but you should be able to change it for the client as a whole.
Reposted from https://community.emc.com/message/3249#3249
"Please see the Server runtime startup settings section of the Deployment guide.
The following list describes the precedence that dfs-runtime.properties files take depending on their location:
local-dfs-runtime.properties file in the local classpath
runtime properties file specified with -Ddfs.runtime.properties.file
dfs-runtime.properties packaged with emc-dfs-rt.jar
For example, settings in the local-dfs-runtime.properties file on the local classpath will take precedence over identical settings in the dfs-runtime.properties file that is located in emc-dfs-rt.jar or the one specified with the -D parameter. The DFS application must be restarted after any changes to the configuration. As a best practice, use the provided configuration file that is deployed in the emc-dfs-rt.jar file for your base settings and use an external file to override settings that you specifically wish to change."
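As an illustration of that precedence, a local override file on the client classpath might look like this. The property name below is a placeholder, not a documented key; look up the actual timeout setting in the "Server runtime startup settings" section of the Deployment guide:

# local-dfs-runtime.properties -- on the local classpath, overrides the
# defaults packaged in emc-dfs-rt.jar and the file given via
# -Ddfs.runtime.properties.file
# PLACEHOLDER key below; replace with the real setting from the guide
dfs.client.request.timeout=30000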
