I am using Apache NiFi, which uses the logback framework for logging.
I have written a custom log appender to push logs to MongoDB.
Now I have a requirement to push logs in a specific format (JSON) to Elasticsearch.
Since a few other apps already use OpenTelemetry to push data to ES, I have been asked to do the same.
So now I am looking for the right way to push data from my custom log appender (a Java class) to OpenTelemetry.
Could someone please point me to a useful example that I can refer to?
The Mongo appender looks like the code below; I want to push the same message to OpenTelemetry as well.
@Override
public void start() {
    super.start();
    logger.info("Initialising MongoDBLogAppender ~~~~~~~~~~~~~~~~");
    try {
        createConnection();
    } catch (Exception e) {
        logger.error("Failed to obtain a database instance from the MongoClient at server [{}] and port [{}].", this.server, this.port);
    }
}

private MongoClient createConnection() throws Exception {
    if (client == null) {
        client = createMongoClient(this.server, this.port, this.databaseName, this.userName, this.password);
    }
    if (this.databaseName != null && !"".equals(this.databaseName)) {
        database = client.getDatabase(this.databaseName);
    } else {
        logger.error("Mongo database name is required.");
    }
    return client;
}

@Override
protected void append(ILoggingEvent iLoggingEvent) {
    if (database == null)
        return;
    String logMessage = iLoggingEvent.getMessage() == null ? "" : iLoggingEvent.getMessage();
    logger.info("LogMessage (inside append method) : " + logMessage); // Change the level to debug after testing
    String[] msgParts = parseLogMessage(logMessage);
    Document doc = new Document()
            .append("Timestamp", new Date(iLoggingEvent.getTimeStamp()))
            .append("Ip", this.hostIp)
            .append("Server", "NiFi")
            .append("Instance", "")
            .append("Url", "")
            .append("TTId", msgParts[1])
            .append("LTId", msgParts[0])
            .append("LUId", msgParts[2])
            .append("SId", msgParts[4])
            .append("RId", msgParts[3])
            .append("Level", iLoggingEvent.getLevel().levelStr)
            .append("Logger", iLoggingEvent.getLoggerName())
            .append("Thread", iLoggingEvent.getThreadName())
            .append("Message", msgParts[5])
            .append("Exception", iLoggingEvent.getThrowableProxy() != null ? ThrowableProxyUtil.asString(iLoggingEvent.getThrowableProxy()) : null);
    try {
        Publisher<InsertOneResult> publisher = database.getCollection(collectionName).insertOne(doc);
        publisher.subscribe(new Subscriber<InsertOneResult>() {
            @Override
            public void onSubscribe(final Subscription s) {
                s.request(1); // <--- Data requested and the insertion will now occur
            }

            @Override
            public void onNext(final InsertOneResult result) {}

            @Override
            public void onError(final Throwable t) {
                logger.error("Failed to insert NiFi log to MongoDB : " + this.toString(), t);
            }

            @Override
            public void onComplete() {}
        });
    } catch (Exception e) {
        logger.error("Encountered exception while logging NiFi log to MongoDB : " + this.toString(), e);
    }
}

/* Log message format expected: "~(<LoggedinTenantId>, <TargetTenantId>, <Userid>) ~ <RequestId> ~ <SessionId> ~ <DetailedMessage>" */
public static String[] parseLogMessage(String logMessage) {
    String[] msgParts = new String[6];
    // ...
    return msgParts;
}

private static synchronized MongoClient createMongoClient(String server, int port, String databaseName, String userName, String password) {
    // ...
    return MongoClients.create(settings);
}
}
Thanks
Mahendra
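For what it's worth, the opentelemetry-java-instrumentation project ships a Logback appender (opentelemetry-logback-appender-1.0) that forwards logback events into the OpenTelemetry Logs API, which is usually the easiest route. If you want to emit from your own appender instead, a minimal sketch against the Logs Bridge API could look like the following; the endpoint, instrumentation name, and attribute keys are illustrative, and the SDK setup would normally live in bootstrap code rather than in the appender itself:
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.logs.Logger;
import io.opentelemetry.api.logs.Severity;
import io.opentelemetry.exporter.otlp.logs.OtlpGrpcLogRecordExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;
import java.util.concurrent.TimeUnit;

public class OtelLogBridge {
    private final Logger otelLogger;

    public OtelLogBridge(String otlpEndpoint) {
        // Batch log records and ship them over OTLP, e.g. to a Collector that
        // forwards to Elasticsearch. The endpoint is an assumption.
        SdkLoggerProvider loggerProvider = SdkLoggerProvider.builder()
                .addLogRecordProcessor(BatchLogRecordProcessor.builder(
                        OtlpGrpcLogRecordExporter.builder()
                                .setEndpoint(otlpEndpoint) // e.g. "http://localhost:4317"
                                .build())
                        .build())
                .build();
        OpenTelemetrySdk sdk = OpenTelemetrySdk.builder()
                .setLoggerProvider(loggerProvider)
                .build();
        this.otelLogger = sdk.getLogsBridge().get("nifi-custom-appender");
    }

    // Call this from append(ILoggingEvent) with the already-parsed fields;
    // the remaining msgParts (TTId, LTId, ...) can be added as attributes the same way.
    public void emit(long timestampMillis, String level, String message,
                     String loggerName, String threadName) {
        otelLogger.logRecordBuilder()
                .setTimestamp(timestampMillis, TimeUnit.MILLISECONDS)
                .setSeverity(Severity.INFO) // map the logback Level to an OTel Severity here
                .setSeverityText(level)
                .setBody(message)
                .setAttribute(AttributeKey.stringKey("logger.name"), loggerName)
                .setAttribute(AttributeKey.stringKey("thread.name"), threadName)
                .emit();
    }
}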
Related
I want to configure a separate log directory for every request. Is this possible with Helidon?
The Oracle Helidon JUL examples are located on GitHub. Building off of those examples, you would have to create a custom Handler that reads the request id from the HelidonMdc:
import io.helidon.logging.common.HelidonMdc;
import io.helidon.logging.jul.HelidonFormatter;
import java.io.IOException;
import java.util.Optional;
import java.util.logging.*;

public class RequestFileHandler extends Handler {
    public RequestFileHandler() {
        super.setFormatter(new HelidonFormatter());
    }

    @Override
    public synchronized void publish(LogRecord r) {
        if (isLoggable(r)) {
            try {
                FileHandler h = new FileHandler(fileName(r), Integer.MAX_VALUE, 1, true);
                try {
                    h.setLevel(getLevel());
                    h.setEncoding(getEncoding());
                    h.setFilter(null);
                    h.setFormatter(getFormatter());
                    h.setErrorManager(getErrorManager());
                    h.publish(r);
                } finally {
                    h.close();
                }
            } catch (IOException | SecurityException jm) {
                this.reportError(null, jm, ErrorManager.WRITE_FAILURE);
            }
        }
    }

    @Override
    public void flush() {
    }

    @Override
    public void close() {
        super.setLevel(Level.OFF);
    }

    private String fileName(LogRecord r) {
        Optional<String> o = HelidonMdc.get("name");
        return o.isPresent() ? o.get() + ".log" : "unknown.log";
    }
}
Like the example code, this code assumes that you have set the MDC value 'name' to the request id. You would then have to install this handler on your application logger, as sketched below.
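A hypothetical wiring sketch; the logger name "com.example.app" and the request-id plumbing are placeholders, not part of the Helidon examples:
import io.helidon.logging.common.HelidonMdc;
import java.util.logging.Logger;

public class LoggingSetup {
    public static void install() {
        // Attach the per-request handler to the application logger once at startup.
        Logger appLogger = Logger.getLogger("com.example.app");
        appLogger.addHandler(new RequestFileHandler());
    }

    public static void onRequest(String requestId) {
        // Store the request id under "name" so fileName(LogRecord) can find it.
        HelidonMdc.set("name", requestId);
    }
}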
I need to migrate the Kinesis library to version 2.2.11, so I followed the tutorial: https://docs.aws.amazon.com/streams/latest/dev/kcl-migration.html
I need to run multiple instances of my consumer app, so each of them needs a unique application name in order to have a separate lease table in DynamoDB.
When initializing the consumer, Kinesis runs DynamoDBLeaseRefresher.createLeaseTableIfNotExists, which checks whether a table exists for this application name and creates one if it cannot be found.
So two operations are performed:
DescribeTable - it returns the table info or throws a ResourceNotFoundException,
if needed - CreateTable.
The problem for me is with the DescribeTable method. When I am looking for an existing table, it is returned with no problem. But when I am looking for a non-existent table, it throws the ResourceNotFoundException -> so far so good. Unfortunately, it then gets wrapped and becomes:
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: software.amazon.awssdk.awscore.exception.AwsServiceException$Builder.extendedRequestId(Ljava/lang/String;)Lsoftware/amazon/awssdk/awscore/exception/AwsServiceException$Builder;
and the app, expecting a ResourceNotFoundException, gets something different instead and crashes.
The wrapped exception message is a bit misleading ("Unable to execute HTTP request"), since the request was performed and returned the proper message: "Resource not found".
The funny thing is that it sometimes works: the exception does not get wrapped, the CreateTable operation is performed, and the consumer starts properly.
I have made a workaround for now where I just create the table before the initialization of the LeaseCoordinator, so it always finds an existing table.
Here is my code:
public KinesisStreamReaderService(String streamName, String applicationName, String regionName) {
    KinesisAsyncClient kinesisClient = KinesisAsyncClient.builder()
            .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
            .region(Region.of(regionName))
            .httpClientBuilder(createHttpClientBuilder())
            .build();
    DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(regionName)).build();
    CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(regionName)).build();
    // if (!dynamoDbTableExists(dynamoClient, applicationName)) {
    //     createDynamoDbTable(dynamoClient, applicationName);
    // }
    ConfigsBuilder configsBuilder = new ConfigsBuilder(streamName, applicationName, kinesisClient,
            dynamoClient, cloudWatchClient, workerId(), KinesisReaderProcessor::new);
    configsBuilder.retrievalConfig().initialPositionInStreamExtended(
            InitialPositionInStreamExtended.newInitialPosition(
                    InitialPositionInStream.LATEST));
    scheduler = new Scheduler(
            configsBuilder.checkpointConfig(),
            configsBuilder.coordinatorConfig(),
            configsBuilder.leaseManagementConfig(),
            configsBuilder.lifecycleConfig(),
            configsBuilder.metricsConfig(),
            configsBuilder.processorConfig(),
            configsBuilder.retrievalConfig().retrievalSpecificConfig(new PollingConfig(streamName, kinesisClient))
    );
}
private void createDynamoDbTable(DynamoDbAsyncClient dynamoClient, String applicationName) {
    log.info("Creating new lease table: {}", applicationName);
    CompletableFuture<CreateTableResponse> createTableFuture = dynamoClient
            .createTable(CreateTableRequest.builder()
                    .provisionedThroughput(ProvisionedThroughput.builder().readCapacityUnits(10L).writeCapacityUnits(10L).build())
                    .tableName(applicationName)
                    .keySchema(KeySchemaElement.builder().attributeName("leaseKey").keyType(KeyType.HASH).build())
                    .attributeDefinitions(AttributeDefinition.builder().attributeName("leaseKey").attributeType(
                            ScalarAttributeType.S).build())
                    .build());
    try {
        CreateTableResponse createTableResponse = createTableFuture.get();
        log.debug("Created new lease table: {}", createTableResponse.tableDescription().tableName());
    } catch (InterruptedException | ExecutionException e) {
        throw new DataStreamException(e.getMessage(), e);
    }
}

private boolean dynamoDbTableExists(DynamoDbAsyncClient dynamoClient, String tableName) {
    CompletableFuture<DescribeTableResponse> describeTableResponseCompletableFutureNew = dynamoClient
            .describeTable(DescribeTableRequest.builder()
                    .tableName(tableName).build());
    try {
        DescribeTableResponse describeTableResponseNew = describeTableResponseCompletableFutureNew
                .get();
        return nonNull(describeTableResponseNew);
    } catch (InterruptedException | ExecutionException e) {
        log.info(e.getMessage(), e);
    }
    return false;
}

private static String workerId() {
    String workerId;
    try {
        workerId = format("%s_%s", getLocalHost().getCanonicalHostName(), randomUUID().toString());
    } catch (UnknownHostException e) {
        workerId = randomUUID().toString();
    }
    return workerId;
}

@Override
public void read(Consumer<String> consumer) {
    this.consumer = consumer;
    scheduler.run();
}

private class KinesisReaderProcessor implements ShardRecordProcessor {
    private String shardId;

    @Override
    public void initialize(InitializationInput initializationInput) {
        this.shardId = initializationInput.shardId();
        log.info("Initializing record processor for shard: {}", shardId);
    }

    @Override
    public void processRecords(ProcessRecordsInput processRecordsInput) {
        log.debug("Checking shard {} for new records", shardId);
        List<KinesisClientRecord> records = processRecordsInput.records();
        if (!records.isEmpty()) {
            log.debug("Processing {} records from kinesis stream shard {}", records.size(), shardId);
            records.forEach(record -> {
                String json = UTF_8.decode(record.data()).toString();
                log.info(json);
                consumer.accept(json);
            });
        }
    }

    @Override
    public void leaseLost(LeaseLostInput leaseLostInput) {
        log.info("Record processor has lost lease, terminating");
    }

    @Override
    public void shardEnded(ShardEndedInput shardEndedInput) {
        try {
            shardEndedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            log.error(e.getMessage(), e);
        }
    }

    @Override
    public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
        try {
            shutdownRequestedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            log.error(e.getMessage(), e);
        }
    }
}
}
Am I missing some configuration for the scheduler or something? Why does it sometimes work?
Thanks
Edit:
The problem is this block of code in DynamoDBLeaseRefresher.tableStatus(), which is invoked to check whether the table exists:
DescribeTableResponse result;
try {
    try {
        result = (DescribeTableResponse) FutureUtils.resolveOrCancelFuture(this.dynamoDBClient.describeTable(request), this.dynamoDbRequestTimeout);
    } catch (ExecutionException var5) {
        throw exceptionManager.apply(var5.getCause());
    } catch (InterruptedException var6) {
        throw new DependencyException(var6);
    }
} catch (ResourceNotFoundException var7) {
    log.debug("Got ResourceNotFoundException for table {} in leaseTableExists, returning false.", this.table);
    return null;
}
and in my case it should get a ResourceNotFoundException if the table is not found, but as I said, the exception gets wrapped into a CompletionException before it reaches the appropriate catch block, and it is instead caught in the code here:
catch (ExecutionException var5) {
    throw exceptionManager.apply(var5.getCause());
This happens 20 times in a loop while trying to initialize the LeaseCoordinator, and then it just stops trying to initialize the connection. (As mentioned above, it occasionally works, which makes it even stranger to me.)
With my workaround it only needs one try to get initialized.
You don't need to create a lease table manually - DynamoDBLeaseCoordinator will create one on initialization if it does not exist and wait until it exists:
@Override
public void initialize() throws ProvisionedThroughputException, DependencyException, IllegalStateException {
    final boolean newTableCreated =
            leaseRefresher.createLeaseTableIfNotExists(initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
    if (newTableCreated) {
        log.info("Created new lease table for coordinator with initial read capacity of {} and write capacity of {}.",
                initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
    }
    // Need to wait for table in active state.
    final long secondsBetweenPolls = 10L;
    final long timeoutSeconds = 600L;
    final boolean isTableActive = leaseRefresher.waitUntilLeaseTableExists(secondsBetweenPolls, timeoutSeconds);
    if (!isTableActive) {
        throw new DependencyException(new IllegalStateException("Creating table timeout"));
    }
}
The issue in your case, I think, is that the table is eventually created, and you should probably poll periodically until it appears - like DynamoDBLeaseCoordinator#initialize() does; see the sketch below.
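For instance, the AWS SDK v2 async DynamoDB client ships a built-in waiter that polls DescribeTable until the table is active; a minimal sketch, assuming an SDK version where waiters are available (2.15+), with the table name as a placeholder:
import software.amazon.awssdk.core.internal.waiters.ResponseOrException;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.DescribeTableResponse;

public final class LeaseTableWait {
    // Blocks until the lease table exists and is ACTIVE (or the waiter gives up).
    static void waitForLeaseTable(DynamoDbAsyncClient dynamoClient, String tableName) {
        ResponseOrException<DescribeTableResponse> result = dynamoClient.waiter()
                .waitUntilTableExists(r -> r.tableName(tableName))
                .join()
                .matched();
        result.response()
                .ifPresent(r -> System.out.println("Lease table active: " + r.table().tableName()));
    }
}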
My app is built upon Spring + SockJS. The main page represents a table of available connections so that the user can monitor them in real time. Every single URL monitor can be suspended/resumed separately from the others. The problem is that once you suspend a monitor, you can never resume it, because the ApplicationEvents property of the MonitoringFacade bean suddenly becomes null for that SINGLE entity. For other entities the listener keeps working pretty well. Yet when methods are invoked on such a null listener, no NullPointerException is ever thrown.
class IndexController implements ApplicationEvents {
    ...
    public IndexController(SimpMessagingTemplate simpMessagingTemplate, MonitoringFacade monitoringFacade) {
        this.simpMessagingTemplate = simpMessagingTemplate;
        this.monitoringFacade = monitoringFacade;
    }

    @PostConstruct
    public void initialize() {
        if (logger.isDebugEnabled()) {
            logger.debug(">>Index controller initialization.");
        }
        monitoringFacade.addDispatcher(this);
    }
    ...
    @Override
    public void monitorUpdated(String monitorId) {
        if (logger.isDebugEnabled()) {
            logger.debug(">>Sending monitoring data to client with monitor id " + monitorId);
        }
        try {
            ConfigurationDTO config = monitoringFacade.findConfig(monitorId);
            Report report = monitoringFacade.findReport(monitorId);
            ReportReadModel readModel = ReportReadModel.mapFrom(config, report);
            simpMessagingTemplate.convertAndSend("/client/update", readModel);
        } catch (Exception e) {
            logger.log(Level.ERROR, "Exception: ", e);
        }
    }
public class MonitoringFacadeImpl implements MonitoringFacade {
    ...
    private ApplicationEvents dispatcher;

    public void addDispatcher(ApplicationEvents dispatcher) {
        logger.info("Setting up dispatcher");
        this.dispatcher = dispatcher;
    }
    ...
    @Override
    public void refreshed(RefreshEvent event) {
        final String monitorId = event.getId().getIdentity();
        if (logger.isDebugEnabled()) {
            logger.debug(String.format(">>Refreshing monitoring data with monitor id '%s'", monitorId));
        }
        Configuration refreshedConfig = configurationService.find(monitorId);
        reportingService.compileReport(refreshedConfig, event.getData());
        if (logger.isDebugEnabled()) {
            logger.debug(String.format(">>Notifying monitoring data updated with monitor id '%s'", monitorId) + dispatcher);
        }
        dispatcher.monitorUpdated(monitorId); // here dispatcher has a null value... or does it?
    }
The refreshed(RefreshEvent event) method successfully receives updates from the Quartz scheduler through the interface and sends them back to the controller.
The question is: how can a singleton-scoped bean have different property values for the different objects it is applied to, and why does such a property become null even though I never set it to null?
UPD:
@MessageMapping("/monitor/{monitorId}/suspend")
public void handleSuspend(@DestinationVariable String monitorId) {
    if (logger.isDebugEnabled()) {
        logger.debug(">>>Handling suspend request for monitor with id " + monitorId);
    }
    try {
        monitoringFacade.disableUrlMonitoring(monitorId);
        monitorUpdated(monitorId); // force client update
    } catch (Exception e) {
        logger.log(Level.ERROR, "Exception: ", e);
    }
}

@MessageMapping("/monitor/{monitorId}/resume")
public void handleResume(@DestinationVariable String monitorId) {
    if (logger.isDebugEnabled()) {
        logger.debug(">>>Handling resume request for monitor with id " + monitorId);
    }
    try {
        monitoringFacade.enableUrlMonitoring(monitorId);
        monitorUpdated(monitorId); // force client update
    } catch (Exception e) {
        logger.log(Level.ERROR, "Exception: ", e);
    }
}
I have the following scheduled piece of code in my Spring Boot Application:
@Scheduled(fixedDelay = DELAY_SECONDS)
private void processJobQueue() {
    BlockingQueue<ReportDeliverable> jobQueue = JobQueueFactory.getJobQueueInstance();
    while (!jobQueue.isEmpty()) {
        // do stuff
        if (rCount == 0) {
            status = send(reportDeliverable);
            if (status == TransferStatus.FAILURE) {
                populateQueue(reportDeliverable);
            }
            if (status == TransferStatus.SUCCESS) { // write the metadata to database
                int i = dbMetadataWriter.writeMetadata(reportDeliverable);
            }
        } else if (rCount == -1) {
            populateQueue(reportDeliverable);
        } else
            logger.info("Record exists in MetaData for {}. Ignoring the File transfer....", reportDeliverable.getFilePath());
    }
}
In my DBMetadataWriter component, writeMetadata() looks something like this:
@Component
public class DBMetadataWriter {
    public int writeMetadata(final ReportDeliverable reportDeliverable) {
        int nbInserted = 0;
        try {
            nbInserted = jdbcTemplate.update(PORTAL_METADATA_INSERT, insertDataValues);
        } catch (Exception e) {
            logger.error("Could not insert metadata for {}, Exception: {} ", reportDeliverable.toString(), e.getMessage());
        }
        return nbInserted;
    }
}
In some cases, when writing the insert to the database, I get tablespace issues, at which point I think it would be wise to shut down the Spring Boot application until the tablespace problems are resolved.
What would be the correct way to handle these rare cases? What technique can I use to gracefully shut down the Spring Boot application, and how can I do it in the above code?
My entry point class, where I initially validate all my database connections before processing etc., has the following...
@Component
public class RegisterReportSchedules implements ApplicationListener<ContextRefreshedEvent> {
    @Autowired
    private ApplicationContext applicationContext;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent) {
    }

    private void shutdownApplication() {
        int exitCode = SpringApplication.exit(applicationContext, (ExitCodeGenerator) () -> 0);
        System.exit(exitCode);
    }
}
There is an exit() method on the SpringApplication class, which can be used to exit a Spring Boot application gracefully.
It requires two parameters:
ApplicationContext
ExitCodeGenerator
For further reading:
https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/SpringApplication.html#exit-org.springframework.context.ApplicationContext-org.springframework.boot.ExitCodeGenerator...-
Code Example:
@Autowired
public void shutDown(ExecutorServiceExitCodeGenerator exitCodeGenerator) {
    SpringApplication.exit(applicationContext, exitCodeGenerator);
}
Call this method when you get the exception for no tablespace.
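A hedged sketch of how this could be wired into the DBMetadataWriter above; the ORA-01653 message check (Oracle's "unable to extend" error) and the injected ApplicationContext are assumptions, not part of the original code:
@Component
public class DBMetadataWriter {
    @Autowired
    private ApplicationContext applicationContext;

    public int writeMetadata(final ReportDeliverable reportDeliverable) {
        int nbInserted = 0;
        try {
            nbInserted = jdbcTemplate.update(PORTAL_METADATA_INSERT, insertDataValues);
        } catch (Exception e) {
            logger.error("Could not insert metadata for {}, Exception: {} ", reportDeliverable, e.getMessage());
            // Naive tablespace check: assumes an Oracle backend surfacing ORA-01653.
            if (e.getMessage() != null && e.getMessage().contains("ORA-01653")) {
                int exitCode = SpringApplication.exit(applicationContext, () -> 1);
                System.exit(exitCode);
            }
        }
        return nbInserted;
    }
}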
I have written a RESTful API using Apache Jersey. I am using MongoDB as my backend. I used Morphia (v1.3.4) to map and persist POJOs to the database. I tried to follow "1 application, 1 connection" in my API as recommended everywhere, but I am not sure I was successful. I run my API in Tomcat 8. I also ran mongostat to see the details and connections. At the start, mongostat showed 1 connection to the MongoDB server. I tested my API using Postman and it was working fine. I then created a load test in SoapUI where I simulated 100 users per second. I saw the update in mongostat: there were 103 connections. Here is the gif which shows this behaviour.
I am not sure why there are so many connections. The interesting fact is that the number of Mongo connections is directly proportional to the number of users I create in SoapUI. Why is that? I found other similar questions, but I think I have implemented their suggestions:
Mongo connection leak with morphia
Spring data mongodb not closing mongodb connections
My code looks like this.
DatabaseConnection.java
// Some imports
public class DatabaseConnection {
    private static volatile MongoClient instance;
    private static String cloudhost = "localhost";

    private DatabaseConnection() { }

    public static synchronized MongoClient getMongoClient() {
        if (instance == null) {
            synchronized (DatabaseConnection.class) {
                if (instance == null) {
                    ServerAddress addr = new ServerAddress(cloudhost, 27017);
                    List<MongoCredential> credentialsList = new ArrayList<MongoCredential>();
                    MongoCredential credential = MongoCredential.createCredential(
                            "test", "test", "test".toCharArray());
                    credentialsList.add(credential);
                    instance = new MongoClient(addr, credentialsList);
                }
            }
        }
        return instance;
    }
}
PourService.java
@Secured
@Path("pours")
public class PourService {
    final static Logger logger = Logger.getLogger(Pour.class);
    private static final int POUR_SIZE = 30;

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response createPour(String request) {
        WebApiResponse response = new WebApiResponse();
        Gson gson = new GsonBuilder().setDateFormat("dd/MM/yyyy HH:mm:ss").create();
        String message = "Pour was not created.";
        HashMap<String, Object> data = null;
        try {
            Pour pour = gson.fromJson(request, Pour.class);
            // Storing the pour to the database
            PourRepository pourRepository = new PourRepository();
            String id = pourRepository.createPour(pour);
            data = new HashMap<String, Object>();
            if (id != null && !id.isEmpty()) {
                data.put("id", id);
                message = "Pour was created successfully.";
                logger.debug(message);
                return response.build(true, message, data, 200);
            }
            logger.debug(message);
            return response.build(false, message, data, 500);
        } catch (Exception e) {
            message = "Error while creating Pour.";
            logger.error(message, e);
            return response.build(false, message, new Object(), 500);
        }
    }
}
PourDao.java
public class PourDao extends BasicDAO<Pour, String> {
    public PourDao(Class<Pour> entityClass, Datastore ds) {
        super(entityClass, ds);
    }
}
PourRepository.java
public class PourRepository {
    private PourDao pourDao;
    final static Logger logger = Logger.getLogger(PourRepository.class);

    public PourRepository() {
        try {
            MongoClient mongoClient = DatabaseConnection.getMongoClient();
            Datastore ds = new Morphia().map(Pour.class)
                    .createDatastore(mongoClient, "tilt45");
            pourDao = new PourDao(Pour.class, ds);
        } catch (Exception e) {
            logger.error("Error while creating PourDao", e);
        }
    }

    public String createPour(Pour pour) {
        try {
            return pourDao.save(pour).getId().toString();
        } catch (Exception e) {
            logger.error("Error while creating Pour.", e);
            return null;
        }
    }
}
When I work with Mongo + Morphia, I get better results using a Factory pattern for the Datastore rather than for the MongoClient; for instance, check the following class:
public class DatastoreFactory {
    private final Datastore datastore;

    public DatastoreFactory(String dbHost, int dbPort, String dbName) {
        final Morphia morphia = new Morphia();
        MongoClientOptions.Builder options = MongoClientOptions.builder().socketKeepAlive(true);
        morphia.getMapper().getOptions().setStoreEmpties(true);
        final Datastore store = morphia.createDatastore(new MongoClient(new ServerAddress(dbHost, dbPort), options.build()), dbName);
        store.ensureIndexes();
        this.datastore = store;
    }

    public Datastore getDatastore() {
        return datastore;
    }
}
With that approach, every time you need a Datastore you can use the one provided by the factory. Of course, this can be implemented better if you use a framework/library that supports the factory pattern (e.g., HK2 with org.glassfish.hk2.api.Factory) along with singleton binding; a hypothetical sketch follows.
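A hypothetical HK2 sketch (package names per Morphia 1.3.x and HK2; the host, port, and database name are placeholders):
import javax.inject.Singleton;
import org.glassfish.hk2.api.Factory;
import org.glassfish.hk2.utilities.binding.AbstractBinder;
import org.mongodb.morphia.Datastore;

// Produces a single shared Datastore for injection into resources.
public class DatastoreProvider implements Factory<Datastore> {
    @Override
    public Datastore provide() {
        return new DatastoreFactory("localhost", 27017, "tilt45").getDatastore();
    }

    @Override
    public void dispose(Datastore instance) {
        // nothing to release here; the underlying MongoClient is shared
    }
}

// Registered in the Jersey ResourceConfig:
// register(new AbstractBinder() {
//     @Override
//     protected void configure() {
//         bindFactory(DatastoreProvider.class).to(Datastore.class).in(Singleton.class);
//     }
// });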
Besides, you can check the documentation of MongoClientOptions's builder method; perhaps you can find better connection-control options there, as in the sketch below.
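For example, if the goal is to cap how many connections the shared client can open, the 3.x driver's MongoClientOptions builder exposes pool limits; the values below are illustrative, not recommendations:
MongoClientOptions options = MongoClientOptions.builder()
        .connectionsPerHost(20)          // max pooled connections per host
        .minConnectionsPerHost(5)
        .maxConnectionIdleTime(60_000)   // ms before an idle connection is closed
        .build();
MongoClient client = new MongoClient(new ServerAddress("localhost", 27017), options);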