I read in the documentation that AsynchronousSocketChannel is thread-safe, so a single instance of it can safely be shared by multiple threads. But when I tried to implement this single-instance approach (in a client-side application), I could not get the write() method to send data to the server.
Previously I had success by calling shutdownOutput() or close() on the channel after calling write(byteBuffer, attachment, completionHandler). But when I keep just a single instance, without calling close() or shutdownOutput(), the message never reaches the server (I can see this from the server log).
Do we need to close the channel in order to make the message reach the server? I use Spring Boot to build this project.
Here is my code:
@Component
public class AgentStatusService {
private static final Logger log =
LoggerFactory.getLogger(AgentStatusService.class);
@Autowired
private SocketAddress serverAddress;
@Autowired
private AsynchronousSocketChannel channel;
public void consumeMessage() throws IOException {
try {
log.info("trying to connect to {}", serverAddress.toString());
channel.connect(serverAddress, channel, new SocketConnectCompletionHandler());
log.info("success connect to {}", channel.getRemoteAddress());
} catch (final AlreadyConnectedException ex) {
final ByteBuffer writeBuffer = ByteBuffer.wrap("__POP ".getBytes());
final Map<String, Object> attachment = new HashMap<>();
attachment.put("buffer", writeBuffer);
attachment.put("channel", channel);
writeBuffer.flip();
channel.write(writeBuffer, attachment, new SocketWriteCompletionHandler());
} catch (final Exception e) {
log.error("an error occured with message : {}", e.getMessage());
e.printStackTrace();
}
}
This is my socket connect completion handler class:
public class SocketConnectCompletionHandler
implements CompletionHandler<Void, AsynchronousSocketChannel> {
private static Logger log =
LoggerFactory.getLogger(SocketConnectCompletionHandler.class);
@Override
public void completed(Void result, AsynchronousSocketChannel channel) {
try {
log.info("connection to {} established", channel.getRemoteAddress());
final ByteBuffer writeBuffer = ByteBuffer.wrap("__POP ".getBytes());
final Map<String, Object> attachment = new HashMap<>();
attachment.put("buffer", writeBuffer);
attachment.put("channel", channel);
writeBuffer.flip();
channel.write(writeBuffer, attachment, new SocketWriteCompletionHandler());
} catch (final IOException e) {
e.printStackTrace();
}
}
@Override
public void failed(Throwable exc, AsynchronousSocketChannel attachment) {
exc.printStackTrace();
try {
log.error("connection to {} was failed", attachment.getRemoteAddress());
} catch (final Exception e) {
log.error("error occured with message : {}", e.getCause());
}
}
}
This is my socket write completion handler class:
public class SocketWriteCompletionHandler
implements CompletionHandler<Integer, Map<String, Object>> {
private static final Logger log =
LoggerFactory.getLogger(SocketWriteCompletionHandler.class);
@Override
public void completed(Integer result, Map<String, Object> attachment) {
try {
final AsynchronousSocketChannel channel =
(AsynchronousSocketChannel) attachment.get("channel");
final ByteBuffer buffer = (ByteBuffer) attachment.get("buffer");
log.info("write {} request to : {}", new String(buffer.array()),
channel.getRemoteAddress());
buffer.clear();
readResponse(channel, buffer);
} catch (final Exception ex) {
ex.printStackTrace();
log.error("an error occured with message : {}", ex.getMessage());
}
}
@Override
public void failed(Throwable exc, Map<String, Object> attachment) {
log.error("an error occured : {}", exc.getMessage());
}
public void readResponse(AsynchronousSocketChannel channel, ByteBuffer writeBuffer) {
final ByteBuffer readBuffer = ByteBuffer.allocate(2 * 1024);
final Map<String, Object> attachment = new HashMap<>();
attachment.put("writeBuffer", writeBuffer);
attachment.put("readBuffer", readBuffer);
attachment.put("channel", channel);
readBuffer.flip();
channel.read(readBuffer, attachment, new SocketReadCompletionHandler());
}
}
If the server thinks it didn't receive the message (and it did receive it when you were previously shutting down or closing the socket), it must be trying to read to end of stream, and so it blocks, or at least never completes its read, and never logs anything.
Why you are using multiple threads in conjunction with asynchronous I/O, or indeed with any socket, remains a mystery.
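For what it's worth, a framing convention on the server side removes the need for the client to signal end of stream at all. A minimal sketch, where the port and the use of the trailing space in "__POP " as a delimiter are assumptions rather than details from the original post:

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Sketch of a server read loop that treats the trailing space of "__POP "
    // as an end-of-message marker, so each message is logged as it arrives
    // instead of blocking until the client closes or shuts down its output.
    public final class DelimitedReadServer {

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000);  // port is an assumption
                 Socket socket = server.accept();
                 InputStream in = socket.getInputStream()) {
                ByteArrayOutputStream message = new ByteArrayOutputStream();
                int b;
                while ((b = in.read()) != -1) {
                    if (b == ' ') {  // delimiter reached
                        System.out.println("received: " + message.toString("UTF-8"));
                        message.reset();  // ready for the next message
                    } else {
                        message.write(b);
                    }
                }
            }
        }
    }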
Related
I am trying to find a bug in some RabbitMQ client code that was developed six or seven years ago. The code was modified to allow for delayed messages. It seems that connections are created to the RabbitMQ server and then never destroyed. Each exists in a separate thread, so I end up with thousands of threads. I am sure the problem is very obvious/simple, but I am having trouble seeing it. I have been looking at the exchangeDeclare method (the commented-out version is from the original code, which seemed to work), but I have been unable to find the default values for autoDelete and durable which are being set in the modified code. The method below is within a Spring service class. Any help, advice, guidance and pointing out of huge obvious errors appreciated!
private void send(String routingKey, String message) throws Exception {
String exchange = applicationConfiguration.getAMQPExchange();
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "fanout");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 10000); //delay in miliseconds i.e 10secs
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder().headers(headers);
Connection connection = null;
Channel channel = null;
try {
connection = myConnection.getConnection();
}
catch(Exception e) {
log.error("AMQP send method Exception. Unable to get connection.");
e.printStackTrace();
return;
}
try {
if (connection != null) {
log.debug(" [CORE: AMQP] Sending message with key {} : {}",routingKey, message);
channel = connection.createChannel();
// channel.exchangeDeclare(exchange, exchangeType);
channel.exchangeDeclare(exchange, "x-delayed-message", true, false, args);
// channel.basicPublish(exchange, routingKey, null, message.getBytes());
channel.basicPublish(exchange, routingKey, props.build(), message.getBytes());
}
else {
log.error("Total AMQP melt down. This should never happen!");
}
}
catch(Exception e) {
log.error("AMQP send method Exception. Unable to get send.");
e.printStackTrace();
}
finally {
channel.close();
}
}
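On the question of the defaults: in the RabbitMQ Java client, the two-argument exchangeDeclare overload used by the commented-out original declares a non-durable, non-auto-delete exchange. So the modified call changed durable from false to true and left autoDelete at false:

    // Two-argument overload from com.rabbitmq.client.Channel:
    channel.exchangeDeclare(exchange, exchangeType);
    // ...is equivalent to (durable = false, autoDelete = false, no extra arguments):
    channel.exchangeDeclare(exchange, exchangeType, false, false, null);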
This is the connection class:
@Service
public class PersistentConnection {
private static final Logger log = LoggerFactory.getLogger(PersistentConnection.class);
private static Connection myConnection = null;
private Boolean blocked = false;
@Autowired ApplicationConfiguration applicationConfiguration;
@PreDestroy
private void destroy() {
try {
myConnection.close();
} catch (IOException e) {
log.error("Unable to close AMQP Connection.");
e.printStackTrace();
}
}
public Connection getConnection( ) {
if (myConnection == null) {
start();
}
return myConnection;
}
private void start() {
log.debug("Building AMQP Connection");
ConnectionFactory factory = new ConnectionFactory();
String ipAddress = applicationConfiguration.getAMQPHost();
String user = applicationConfiguration.getAMQPUser();
String password = applicationConfiguration.getAMQPPassword();
String virtualHost = applicationConfiguration.getAMQPVirtualHost();
String port = applicationConfiguration.getAMQPPort();
try {
factory.setUsername(user);
factory.setPassword(password);
factory.setVirtualHost(virtualHost);
factory.setPort(Integer.parseInt(port));
factory.setHost(ipAddress);
myConnection = factory.newConnection();
}
catch (Exception e) {
log.error("Unable to initialise AMQP Connection.");
e.printStackTrace();
}
myConnection.addBlockedListener(new BlockedListener() {
public void handleBlocked(String reason) throws IOException {
// Connection is now blocked
log.warn("Message Server has blocked. It may be resource limitted.");
blocked = true;
}
public void handleUnblocked() throws IOException {
// Connection is now unblocked
log.warn("Message server is unblocked.");
blocked = false;
}
});
}
public Boolean isBlocked() {
return blocked;
}
}
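One possible source of the growing thread count, offered as an assumption rather than a confirmed diagnosis: getConnection() is not synchronized, so several threads that observe myConnection == null can each run start(). Every factory.newConnection() spins up its own consumer threads, while only the connection last assigned to the static field is ever closed by destroy(). A minimal sketch of a race-free version, with the rest of the class unchanged:

    // Sketch: serialise lazy initialisation so at most one AMQP connection
    // (and its thread pool) is ever created by this service.
    public synchronized Connection getConnection() {
        if (myConnection == null || !myConnection.isOpen()) {
            start();
        }
        return myConnection;
    }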
I'm working on an AWS Lambda function that reads streams from Kinesis and inserts the data into a database.
The first scenario is when I'm sending only one record to Kinesis: everything goes as expected and I can see the record in my database.
The second scenario is when I send records in batch mode (1,000 records). After the execution I check the Kinesis incoming-data metric and I can see that all 1,000 records arrived. The next step is checking the Lambda function logs, and there I can see that only a few records were processed (2 or 3).
That is confirmed when I check the database: only 2 or 3 records got inserted.
I can't understand why my Lambda is not handling all the records. The logs aren't telling me much either; I'm handling all the exceptions, so no error shows up on the Lambda error-count metric.
I can't really tell whether Kinesis is not invoking the Lambda for all the records, or whether there is some kind of configuration I'm missing, or something else.
Here is the method that sends to Kinesis from the batch:
@Override
public void sendChannelRecords(Channel channel) {
String kinesisStreamName = dataLakeConfig.getKinesisStreamName();
try {
byte[] bytes = (conversionToJsonUtils.convertObjectToJsonString(channel) + "\n").getBytes("UTF-8");
PutRecordRequest putRecordRequest = new PutRecordRequest()
.withPartitionKey(String.valueOf(random.nextInt()))
.withStreamName(kinesisStreamName)
.withData(ByteBuffer.wrap(bytes));
callKinesisAsync(putRecordRequest);
} catch (Exception e) {
log.error("an error has been occurred during sending Channel to Kinesis : " + e.getMessage());
}
}
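callKinesisAsync() isn't shown, so one thing worth checking is whether the result of each asynchronous put is ever inspected; with a fire-and-forget put, throttled or failed records disappear silently and never reach the stream. A sketch using the v1 SDK's AsyncHandler, where kinesisAsyncClient is an assumed AmazonKinesisAsync field:

    // Sketch: surface per-record failures instead of firing and forgetting.
    kinesisAsyncClient.putRecordAsync(putRecordRequest,
            new AsyncHandler<PutRecordRequest, PutRecordResult>() {
                @Override
                public void onSuccess(PutRecordRequest request, PutRecordResult result) {
                    log.debug("record stored in shard {}", result.getShardId());
                }

                @Override
                public void onError(Exception e) {
                    log.error("putRecord failed: {}", e.getMessage(), e);
                }
            });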
The handleRequest method of the Lambda function:
public class RocEventProcessorToTeradata implements RequestHandler<KinesisEvent, Object> {
/**
* This function write different Collections in Teradata warehouse.
* For cost reasons, we chose to use only a single Kinesis stream to receive all collections from ROC-API.
*/
private static Logger log = (Logger) LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
private static String LOG_LEVEL = System.getenv("LOGGING_LEVEL");
/**
* Note: the Teradata connection must be opened and closed inside the handleRequest method only, otherwise the connection remains open!
*/
@SneakyThrows
public Object handleRequest(KinesisEvent event, Context context) {
log.setLevel(Level.valueOf(LOG_LEVEL));
TeradataParams teradataParams = buildTeradataParams();
List<UserRecord> userRecordsList = RecordDeaggregator.deaggregate(event.getRecords());
Connection conn = null;
Channel channel = new ConversionToJava<Channel>().getAsGivenEntity(new String(userRecordsList.get(0).getData().array()), Channel.class);
boolean isInstanceOfChannel = channel.getEpgid() != null;
try {
Class.forName("com.teradata.jdbc.TeraDriver");
conn = DriverManager.getConnection(teradataParams.getTeradataUrl(), teradataParams.getTeradataUser(), teradataParams.getTeradataPwd());
if (isInstanceOfChannel) {
InputFactory.getInputObject(InputType.CHANNEL).apply(conn, userRecordsList, teradataParams);
}
} catch (SQLException | ClassNotFoundException e) {
log.error(" an error has been occurred during connection opening Teradata : {}", e.getMessage());
throw e;
} finally {
// close resources associated with the connection, as Teradata doesn't allow more than 16 open resources in the same connection
try {
if (conn != null) {
conn.close();
}
} catch (Exception e) {
log.error("an error has been occurred when closing Teradata connection : {}", e.getMessage());
}
}
return "";
}
private TeradataParams buildTeradataParams() {
String teradataUser = getTeradataStoreParam().get(TeradataConnectionParams.USERNAME_PARAM);
String teradataPwd = getTeradataStoreParam().get(TeradataConnectionParams.PASSWORD_PARAM);
String teradataUrl = getTeradataStoreParam().get(TeradataConnectionParams.DATABASE_URL);
String connurl = "jdbc:teradata://" + teradataUrl + "/database=My_DB,tmode=ANSI,charset=UTF8";
return TeradataParams.builder().teradataUser(teradataUser).teradataPwd(teradataPwd).teradataUrl(connurl).build();
}
private Map<String, String> getTeradataStoreParam() {
return new ParameterStoreImpl().
getParameterStore(
Lists.newArrayList(TeradataConnectionParams.USERNAME_PARAM,
TeradataConnectionParams.PASSWORD_PARAM,
TeradataConnectionParams.DATABASE_URL)
);
}
}
@Override
public void apply(Connection conn, List<UserRecord> userRecordsList, TeradataParams teradataParams) {
log.setLevel(Level.valueOf(LOG_LEVEL));
try {
Channel channel = new ConversionToJava<Channel>().getAsGivenEntity(new String(userRecordsList.get(0).getData().array()), Channel.class);
ChannelRepository channelRepository = new ChannelRepositoryImpl();
insertChannelStatement = conn.prepareStatement(ChannelQuery.INSERT_CHANNEL);
deleteChannelStatement = conn.prepareStatement(ChannelQuery.DELETE_CHANNEL);
rejectChannelStatement = conn.prepareStatement(ChannelQuery.INSERT_REJET);
try {
if (channel.isDeleted()) {
channelRepository.deleteChannel(channel, deleteChannelStatement);
} else {
channelRepository.executeChannel(channel, insertChannelStatement);
}
conn.commit();
} catch (ChannelException e) {
log.error("an error has been occured during Channel insertion in teradata {}", e.getMessage());
handleChannelReject(conn, rejectChannelStatement, channel, e);
}
} catch (SQLException e) {
log.error(" an error has been occurred during connection opening Teradata Channel: {}", e.getMessage());
} finally {
try {
closeStatements(insertChannelStatement, deleteChannelStatement, rejectChannelStatement);
} catch (Exception e) {
log.error("an error has been occurred when closing Teradata channel connection : {}", e.getMessage());
}
}
}
}
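One detail that stands out against the symptom described above: both handleRequest and apply only ever read userRecordsList.get(0), so an invocation carrying a whole batch of deaggregated records still results in a single insert. Since Lambda groups many Kinesis records into one invocation, that alone would collapse 1,000 sends into a handful of rows. A sketch of the loop, reusing the post's own helper classes:

    // Sketch: process every deaggregated record in the batch, not just the first.
    for (UserRecord userRecord : userRecordsList) {
        Channel channel = new ConversionToJava<Channel>()
                .getAsGivenEntity(new String(userRecord.getData().array()), Channel.class);
        try {
            if (channel.isDeleted()) {
                channelRepository.deleteChannel(channel, deleteChannelStatement);
            } else {
                channelRepository.executeChannel(channel, insertChannelStatement);
            }
        } catch (ChannelException e) {
            handleChannelReject(conn, rejectChannelStatement, channel, e);
        }
    }
    conn.commit();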
public class ChannelRepositoryImpl implements ChannelRepository {
private static Logger log = (Logger) LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
private static String LOG_LEVEL = System.getenv("LOGGING_LEVEL");
@Override
public void executeChannel(Channel channel, PreparedStatement statement) {
log.setLevel(Level.valueOf(LOG_LEVEL));
final FeedChannelPreparedStatement feedStatement = new FeedChannelPreparedStatement();
try {
feedStatement.feedChannelPreparedStatementInsert(statement, channel);
statement.executeBatch();
} catch (Exception e) {
System.out.println("Insert Channel Exception : "+ e.getMessage());
log.error("an error has been occurred during Channel insertion in Teradata : {}", e.getMessage());
throw new ChannelException(e.getMessage());
}
}
@Override
public void deleteChannel(Channel channel, PreparedStatement statement) {
log.setLevel(Level.valueOf(LOG_LEVEL));
final FeedChannelPreparedStatement feedStatement = new FeedChannelPreparedStatement();
try {
System.out.println("Delete Channel : "+ channel.getEpgid());
feedStatement.feedChannelPreparedStatementDelete(statement, channel);
statement.executeBatch();
} catch (Exception e) {
System.out.println("Delete Channel Exception: "+ channel.getEpgid());
log.error("an error has been occurred during Channel deleting in Teradata : {}", e.getMessage());
throw new ChannelException(e.getMessage());
}
}
}
public class FeedChannelPreparedStatement extends ChannelPreparedStatement {
void feedChannelPreparedStatementInsert(PreparedStatement statement, Channel channel) throws SQLException {
List<ProductCodeTable> products = ofNullable(channel.getProducts()).orElse(emptyList());
if (!products.isEmpty()) {
for (ProductCodeTable product : products) {
if (product.getId() != null) {
addStatementBatch(statement, channel, product.getId());
}
}
} else {
addStatementBatch(statement, channel, null);
}
}
void feedChannelPreparedStatementDelete(PreparedStatement statement, Channel channel) throws SQLException {
addStatementBatchForDelete(statement, channel.getEpgid());
}
}
public class ChannelPreparedStatement {
private static final int EPGID_ID_INDEX = 1;
private static final int SERVICE_NAME_INDEX = 2;
private static final int PRODUCT_ID_INDEX = 3;
private static final int ID_PROVENANCE_INDEX = 4;
private static final int TMS_CHARGEMENT_INDEX = 5;
void addStatementBatch(PreparedStatement ps, Channel channel, String productId) throws SQLException {
ps.setObject(EPGID_ID_INDEX, Integer.toString(channel.getEpgid()), Types.VARCHAR);
ps.setObject(SERVICE_NAME_INDEX, channel.getServiceName(), Types.VARCHAR);
ps.setObject(PRODUCT_ID_INDEX, productId != null ? productId : "", Types.VARCHAR);
ps.setInt(ID_PROVENANCE_INDEX, 68);
ps.setTimestamp(TMS_CHARGEMENT_INDEX, FormatUtil.getTmsChargement());
ps.addBatch();
}
void addStatementBatchForDelete(PreparedStatement ps, Integer epgId) throws SQLException {
ps.setObject(EPGID_ID_INDEX, Integer.toString(epgId), Types.VARCHAR);
ps.addBatch();
}
}
I'm trying to apply Redisson to my project as a message broker and I have a question: is it possible to get Redisson to process received messages asynchronously? I created a small example and sent 4 messages from different URLs. I expected Redisson to process them asynchronously, but it handled them one by one.
Here is the implementation:
public class RedisListenerServiceImpl implements MessageListener<String> {
private static final Logger log = LoggerFactory.getLogger(RedisListenerServiceImpl.class);
private final ObjectMapper objectMapper = new ObjectMapper();
@Override
public void onMessage(CharSequence channel, String stringMsg) {
log.info("Message received: {}", stringMsg);
MessageDto msg;
try {
msg = objectMapper.readValue(stringMsg, MessageDto.class);
} catch (final IOException e) {
log.error("Unable to deserialize message: {}", e.getMessage(), e);
return;
}
try {
//Do my stuff
} catch (Exception e) {
log.error("Unable to get service from factory: {}", e.getMessage(), e);
}
}
}
And the config:
@Configuration
public class RedisListenerConfig {
@Autowired
public RedisListenerConfig(RedissonClient redisClient,
MessageListener redisListenerService,
#Value("${redis.sub.key}") String redisSubKey) {
RTopic subscribeTopic = redisClient.getTopic(redisSubKey);
subscribeTopic.addListenerAsync(String.class, redisListenerService);
}
}
That's the expected behavior. If you want your messages to be processed concurrently when the listener's onMessage() method is triggered, just use a thread pool to process them.
Since Redisson doesn't know how many threads you want consuming the triggered events, it leaves that implementation detail to you.
public class RedisListenerServiceImpl implements MessageListener<String> {
private static final Logger log = LoggerFactory.getLogger(RedisListenerServiceImpl.class);
private final ObjectMapper objectMapper = new ObjectMapper();
private final ExecutorService executorService = Executors.newFixedThreadPool(10);
@Override
public void onMessage(CharSequence channel, String stringMsg) {
log.info("Message received: {}", stringMsg);
MessageDto msg;
try {
msg = objectMapper.readValue(stringMsg, MessageDto.class);
} catch (final IOException e) {
log.error("Unable to deserialize message: {}", e.getMessage(), e);
return;
}
final MessageDto message = msg;
// Hand the work off to the pool so onMessage() returns quickly and
// further messages can be processed concurrently.
executorService.submit(() -> {
try {
// Do my stuff with 'message'
System.out.println("do something with message: " + message);
} catch (Exception e) {
log.error("Unable to get service from factory: {}", e.getMessage(), e);
}
});
}
}
My server has a few handlers:
new ConnectionHandler(),
new MessageDecoder(),
new PacketHandler(),
new MessageEncoder()
It seems that the last handler should be called when the server sends data to the client, but it is never called.
Here is the code of the handler:
public class MessageEncoder extends ChannelOutboundHandlerAdapter {
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
System.out.println("from encoder");
NetCommand command = (NetCommand)msg;
byte[] data = JsonSerializer.getInstance().Serialize(command);
ByteBuf encoded = ctx.alloc().buffer(4+data.length);
encoded.writeInt(data.length).writeBytes(data);
ctx.writeAndFlush(encoded);
}
}
As you can see, I'm using a POJO instead of a ByteBuf, but it does not work. Even when I try to send a ByteBuf, the last handler is still not called.
Also, here is how I send the data:
@Override
public void run()
{
Packet packet;
NetCommand data;
Session session;
while ( isActive )
{
try {
session = (Session) this.sessionQueue.take();
packet = session.getWritePacketQueue().take();
data = packet.getData();
System.out.println("data code: "+data.netCode);
ChannelHandlerContext ctx = packet.getSender().getContext();
ctx.writeAndFlush(data);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
What did I do wrong?
Most likely the ChannelHandlerContext you use for writeAndFlush() belongs to a ChannelHandler which is before your last handler in the pipeline. If you want to ensure that the write starts at the tail/end of the pipeline, use ctx.channel().writeAndFlush(...).
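Applied to the sending loop in the question, that is a one-line change. A write issued through a handler's context only travels through the outbound handlers sitting closer to the head of the pipeline than that handler, while channel.writeAndFlush() starts at the tail and therefore passes through MessageEncoder:

    // Sketch: start the write at the tail of the pipeline so it passes
    // through MessageEncoder before leaving the channel.
    ChannelHandlerContext ctx = packet.getSender().getContext();
    ctx.channel().writeAndFlush(data);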
I have a driver/main class in a file like this (basically I'm trying to mix Storm and Akka). In the TenderEventSpout2 class, I am trying to send messages to and receive messages from an actor.
public class TenderEventSpout2 extends BaseRichSpout {
ActorSystemHandle actorSystemHandle;
ActorSystem _system;
ActorRef eventSpoutActor;
Future<Object> future;
Timeout timeout;
String result;
@Override
public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
//String[] message = {"WATCH_DIR"};
timeout = new Timeout(Duration.create(60, "seconds"));
List<Object> messageList = new ArrayList<Object>();
messageList.add("WATCH_DIR");
messageList.add(this.inputDirName);
actorSystemHandle = new ActorSystemHandle();
_system = actorSystemHandle.getActorSystem();
eventSpoutActor = _system.actorOf(Props.create(EventSpoutActor.class));
future = Patterns.ask(eventSpoutActor, messageList, timeout);
}
@Override
public void nextTuple() {
String result = null;
try{
result = (String) Await.result(future, timeout.duration());
}
catch(Exception e){
e.printStackTrace();
}
}
My Actor is:
public class EventSpoutActor extends UntypedActor {
public ConcurrentLinkedQueue<String> eventQueue = new ConcurrentLinkedQueue<>();
Inbox inbox;
@Override
public void onReceive(Object message){// throws IOException {
if (message instanceof List<?>) {
System.out.println(((List<Object>)message).get(0)+"*******************");
if(((List<Object>)message).get(0).equals("WATCH_DIR")){
final List<Object> msg = (List<Object>)message;
Thread fileWatcher = new Thread(new Runnable(){
@Override
public void run() {
System.out.println(msg.get(1)+"*******************");
try {
String result = "Hello";
System.out.println("Before Sending Message *******************");
getSender().tell(result, getSelf());
}
catch (Exception e) {
getSender().tell(new akka.actor.Status.Failure(e), getSelf());
throw e;
}
}
});
fileWatcher.setDaemon(true);
fileWatcher.start();
System.out.println("Started file watcher");
}
}
else{
System.out.println("Unhandled !!");
unhandled(message);
}
}
}
I'm able to send messages to my EventSpoutActor, but I'm facing a problem with receiving messages. Why is that? I get the following message printed in the console:
[EventProcessorActorSystem-akka.actor.default-dispatcher-3]
[akka://EventProcessorActorSystem/deadLetters] Message [java.lang.String]
from Actor[akka://EventProcessorActorSystem/user/$a#-1284357486] to
Actor[akka://EventProcessorActorSystem/deadLetters] was not delivered. [1]
dead letters encountered.
This logging can be turned off or adjusted with configuration settings
'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
So, I found out why the messages were not delivered.
getSender().tell(result, getSelf());
This line, which is supposed to send the message back to the sender, loses its context when it is used inside the thread code:
Thread fileWatcher = new Thread(new Runnable(){
@Override
public void run() {
System.out.println(msg.get(1)+"*******************");
try {
String result = "Hello";
System.out.println("Before Sending Message *******************");
getSender().tell(result, getSelf());
When I moved the "tell" code outside the thread, it worked.
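The underlying rule is that getSender() is only valid while the actor is processing the current message; by the time the background thread runs, it can resolve to a different sender or to deadLetters. If the reply really has to be sent from the thread, capturing the references into final locals before starting it also works, as a sketch:

    // Sketch: capture the actor references before leaving onReceive(),
    // so the background thread replies to the correct actor.
    final ActorRef requester = getSender();
    final ActorRef self = getSelf();
    Thread fileWatcher = new Thread(new Runnable() {
        @Override
        public void run() {
            requester.tell("Hello", self);  // uses the captured references
        }
    });
    fileWatcher.setDaemon(true);
    fileWatcher.start();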