RabbitMQ messaging with Qpid 0.32 client - Java

Using the Qpid Java client, I can only get messages delivered through the exchange to the bound queue using the following expanded form of AMQAnyDestination:
Destination queue = new AMQAnyDestination( new AMQShortString("onms2"),
                                           new AMQShortString("direct"),
                                           new AMQShortString("Simon"),
                                           true,
                                           true,
                                           new AMQShortString(""),
                                           false,
                                           bindvars);
If I attempt to use the shorter form, which just specifies the address as follows, it doesn't work:
Destination queue = new AMQAnyDestination("onms2/Simon");
The message reaches RabbitMQ OK but is not delivered.
Qpid 0.32 client
RabbitMQ 3.5.7
Exchange: onms
Routing key: Simon
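For reference, the client chooses between two destination syntaxes, and the debug logs further down show it picking BURL for the working run and ADDR for the failing one. One workaround worth trying, hedged because the "BURL:" prefix is an assumption based on the client's documented syntax selection (qpid.dest_syntax), is to force the legacy binding-URL form using the exact URL the working run logs:
// Sketch: force BURL syntax instead of letting the client infer ADDR from
// the plain "onms2/Simon" string. The URL mirrors the one logged by the
// working run; the "BURL:" prefix is an assumption, not tested here.
Destination queue = new AMQAnyDestination(
        "BURL:direct://onms2/Simon/?routingkey='Simon'&exclusive='true'&autodelete='true'");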
I have been using the Qpid examples, modifying the ListSender example as below:
package org.apache.qpid.example;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.qpid.client.AMQAnyDestination;
import org.apache.qpid.client.AMQConnection;
import org.apache.qpid.framing.AMQShortString;
import org.apache.qpid.jms.ListMessage;
public class ListSender {
    public static void main(String[] args) throws Exception
    {
        Connection connection =
                new AMQConnection("amqp://simon:simon@localhost/test?brokerlist='tcp://localhost:5672'");
        AMQShortString a1 = new AMQShortString("");
        AMQShortString a2 = new AMQShortString("");
        AMQShortString[] bindvars = new AMQShortString[]{a1, a2};
        boolean is_durable = true;
        /*
        Destination queue = new AMQAnyDestination( new AMQShortString("onms2"),
                                                   new AMQShortString("direct"),
                                                   new AMQShortString("Simon"),
                                                   true,
                                                   true,
                                                   new AMQShortString(""),
                                                   false,
                                                   bindvars);
        */
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = new AMQAnyDestination("onms2/Simon");
        //Destination queue = new AMQAnyDestination("amqp:OpenNMSExchange/Taylor; {create: always}");
        //Destination queue = new AMQAnyDestination("OpenNMSExchange; {create: always}");
        MessageProducer producer = session.createProducer(queue);
        ListMessage m = ((org.apache.qpid.jms.Session) session).createListMessage();
        m.setIntProperty("Id", 987654321);
        m.setStringProperty("name", "WidgetSimon");
        m.setDoubleProperty("price", 0.99);
        List<String> colors = new ArrayList<String>();
        colors.add("red");
        colors.add("green");
        colors.add("white");
        m.add(colors);
        Map<String, Double> dimensions = new HashMap<String, Double>();
        dimensions.put("length", 10.2);
        dimensions.put("width", 5.1);
        dimensions.put("depth", 2.0);
        m.add(dimensions);
        List<List<Integer>> parts = new ArrayList<List<Integer>>();
        parts.add(Arrays.asList(new Integer[]{1, 2, 5}));
        parts.add(Arrays.asList(new Integer[]{8, 2, 5}));
        m.add(parts);
        Map<String, Object> specs = new HashMap<String, Object>();
        specs.put("colours", colors);
        specs.put("dimensions", dimensions);
        specs.put("parts", parts);
        m.add(specs);
        producer.send((Message) m);
        System.out.println("Sent: " + m);
        connection.close();
    }
}
When it works using the expanded format of AMQAnyDestination, the debug log looks like this:
163 [main] INFO org.apache.qpid.client.AMQConnection - Connection 1 now connected from /127.0.0.1:43298 to localhost/127.0.0.1:5672
163 [main] DEBUG org.apache.qpid.client.AMQConnection - Are we connected:true
163 [main] DEBUG org.apache.qpid.client.AMQConnection - Connected with ProtocolHandler Version:0-91
166 [main] DEBUG org.apache.qpid.client.AMQDestination - Based on direct://onms2/Simon/?routingkey='Simon'&exclusive='true'&autodelete='true' the selected destination syntax is BURL
169 [main] DEBUG org.apache.qpid.client.AMQConnectionDelegate_8_0 - Write channel open frame for channel id 1
184 [main] DEBUG org.apache.qpid.client.AMQSession - Created session:org.apache.qpid.client.AMQSession_0_8@1d251891
186 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (1028176102)Method frame received: [ChannelOpenOkBody]
189 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (1028176102)Method frame received: [BasicQosOkBodyImpl: ]
195 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - MessageProducer org.apache.qpid.client.BasicMessageProducer_0_8@668bc3d5 using publish mode : ASYNC_PUBLISH_ALL
206 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - Sending content body frames to direct://onms2/Simon/?routingkey='Simon'&exclusive='true'&autodelete='true'
206 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - Sending content header frame to direct://onms2/Simon/?routingkey='Simon'&exclusive='true'&autodelete='true'
207 [main] DEBUG org.apache.qpid.framing.FieldTable - FieldTable::writeToBuffer: Writing encoded length of 67...
When it fails using the shorter syntax, the debug log looks like this:
149 [main] INFO org.apache.qpid.client.AMQConnection - Connection 1 now connected from /127.0.0.1:36940 to localhost/127.0.0.1:5672
149 [main] DEBUG org.apache.qpid.client.AMQConnection - Are we connected:true
149 [main] DEBUG org.apache.qpid.client.AMQConnection - Connected with ProtocolHandler Version:0-91
153 [main] DEBUG org.apache.qpid.client.AMQConnectionDelegate_8_0 - Write channel open frame for channel id 1
169 [main] DEBUG org.apache.qpid.client.AMQSession - Created session:org.apache.qpid.client.AMQSession_0_8@6bdf28bb
170 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (472294496)Method frame received: [ChannelOpenOkBody]
171 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (472294496)Method frame received: [BasicQosOkBodyImpl: ]
179 [main] DEBUG org.apache.qpid.client.AMQDestination - Based on onms2/Simon the selected destination syntax is ADDR
182 [main] DEBUG org.apache.qpid.client.AMQConnectionDelegate_8_0 - supportsIsBound: false
182 [main] DEBUG org.apache.qpid.client.AMQConnectionDelegate_8_0 - supportsIsBound: false
182 [main] DEBUG org.apache.qpid.client.AMQConnectionDelegate_8_0 - supportsIsBound: false
184 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (472294496)Method frame received: [ExchangeDeclareOkBodyImpl: ]
184 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - MessageProducer org.apache.qpid.client.BasicMessageProducer_0_8@15975490 using publish mode : ASYNC_PUBLISH_ALL
195 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - Sending content body frames to 'onms2'/'Simon'; None
195 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - Sending content header frame to 'onms2'/'Simon'; None
196 [main] DEBUG org.apache.qpid.framing.FieldTable - FieldTable::writeToBuffer: Writing encoded length of 90...
196 [main] DEBUG org.apache.qpid.framing.FieldTable - {Id=[INT: 987654321], name=[LONG_STRING: WidgetSimon], price=[DOUBLE: 0.99], qpid.subject=[LONG_STRING: Simon], JMS_QPID_DESTTYPE=[INT: 2]}
198 [main] DEBUG org.apache.qpid.client.AMQSession - Closing session: org.apache.qpid.client.AMQSession_0_8@6bdf28bb
198 [main] DEBUG org.apache.qpid.client.protocol.AMQProtocolSession - closeSession called on protocol session for session 1
Ideally I need the shorter syntax to work, as it is what another application I am using posts messages with over AMQP.
I suspect there is something incorrect with the syntax I am using to define the address, but I can't see what it is.
I have tried:
amqp:onms2/Simon
ADDR:onms2/Simon
I have confirmed the RabbitMQ config is correct by testing with a standalone Java client using Qpid, and also with Perl (using net_amqp) and Python (using pika), so I don't think it's that.
Any guidance appreciated.
EDIT:
I found some extra configuration parameters on the Qpid website that I had missed.
When I configure the address as follows, it works!
onms3/Simon; {'create':'always','node':{'type':'topic'} }
Detail
<name> [ / <subject> ] ; {
    create: always | sender | receiver | never,
    delete: always | sender | receiver | never,
    assert: always | sender | receiver | never,
    mode: browse | consume,
    node: {
        type: queue | topic,
        durable: True | False,
        x-declare: { ... <declare-overrides> ... },
        x-bindings: [<binding_1>, ... <binding_n>]
    },
    link: {
        name: <link-name>,
        durable: True | False,
        reliability: unreliable | at-most-once | at-least-once | exactly-once,
        x-declare: { ... <declare-overrides> ... },
        x-bindings: [<binding_1>, ... <binding_n>],
        x-subscribe: { ... <subscribe-overrides> ... }
    }
}
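Applying the grammar above to the working address, the only line that has to change in the ListSender code is the destination; 'create: always' declares the node if it is missing and 'type: topic' maps the name onto the exchange rather than a queue:
// Working form from the edit above: "onms3" resolves to the exchange
// (node type topic, created if absent) and "Simon" becomes the subject.
Destination queue = new AMQAnyDestination(
        "onms3/Simon; {'create':'always','node':{'type':'topic'}}");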

Related

Unable to reset partition offset with CooperativeStickyAssignor

I'm trying to reset the offset of a partition to 0, like this:
final KafkaConsumer<String, String> stateConsumer = new KafkaConsumer<>(stateConsumerProperties.getConsumerProps());
stateConsumer.subscribe(STATE_TOPIC);
...
stateConsumer.seekToBeginning(stateConsumer.assignment());
...
stateConsumer.poll(Duration.ofMillis(1000)) // timeout too long, testing only
             .forEach(record -> {
                 log.info("Warmup state read: " + record.value() + ", partition: " + record.partition());
                 stateMessages.add(record.value());
             });
Consumer config is only this:
this.stateConsumerProperties.setConsumer(Map.of(
        ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092",
        ConsumerConfig.GROUP_ID_CONFIG, "state",
        ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest",
        ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer",
        ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer",
        ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "30",
        ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "2000",
        ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, CustomPartitionAssignor.class.getName()
));
The "custom" assignor is only for logging purposes, these are the only overrides:
public class CustomPartitionAssignor extends CooperativeStickyAssignor {
    ...
    @Override
    public GroupAssignment assign(Cluster metadata, GroupSubscription groupSubscription) {
        return super.assign(metadata, groupSubscription);
    }

    @Override
    public void onAssignment(Assignment assignment, ConsumerGroupMetadata metadata) {
        super.onAssignment(assignment, metadata);
        log.info("Old assigned partitions: " + Arrays.toString(ASSIGNED_PARTITIONS.toArray()));
        ASSIGNED_PARTITIONS = assignment.partitions();
        log.info("New Assigned partitions: " + assignment.partitions());
    }
    ...
}
I have only two identical consumers for testing, and each consumer runs this code on join.
The first joiner has no issue since there is no state yet ("consumers" read another topic and produce "state" to the state topic).
The problem is, this is what I get in the log when the second consumer joins:
consumer_2 | 2022-10-10 18:46:26.833 INFO 7 --- [pool-1-thread-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-state-2, groupId=state] Setting offset for partition state-1 to the committed offset FetchPosition{offset=361, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[179bd3c6448e:9092 (id: 1001 rack: null)], epoch=0}}
consumer_2 | 2022-10-10 18:46:26.875 INFO 7 --- [pool-1-thread-1] c.company.fdi.poc.kafka.service.Consumer : Warmup state read: state 723, partition: 1
consumer_2 | 2022-10-10 18:46:26.876 INFO 7 --- [pool-1-thread-1] c.company.fdi.poc.kafka.service.Consumer : Warmup state read: state 725, partition: 1
consumer_2 | 2022-10-10 18:46:26.876 INFO 7 --- [pool-1-thread-1] c.company.fdi.poc.kafka.service.Consumer : Warmup state read: state 727, partition: 1
State "production" is just an atomic counter that starts at 0.
What I want to happen is that when a new consumer joins, it resets its assigned partition offset to 0 and starts reading from the first record.
I have a suspicion that this has something to do with the CooperativeStickyAssignor, but I have no clue as of now.
You can assume 2 consumers and 2 partitions for the state topic if that helps; the cluster should be as balanced as possible.
The starting point is two partitions and one consumer; then I add another consumer.
Any help much appreciated, thanks in advance.
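A note for anyone comparing later: subscribe() is lazy, so the consumer has no assignment until poll() has driven a rebalance, and with the cooperative protocol partitions also arrive incrementally; a seekToBeginning(stateConsumer.assignment()) issued too early can therefore act on an empty set. A minimal sketch of seeking inside the rebalance callback instead (STATE_TOPIC and stateConsumer reused from the question, the rest an assumption):
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

stateConsumer.subscribe(STATE_TOPIC, new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // nothing to do in this sketch
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Runs on the polling thread once the (incremental, with
        // CooperativeStickyAssignor) assignment is known, so the seek
        // only touches partitions this consumer actually owns.
        stateConsumer.seekToBeginning(partitions);
    }
});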

In kafka stream topology with 'exactly_once' processing guarantee, message lost on exception

I have a requirement where I need to process messages from Kafka without losing any message and also need to maintain the message order. Therefore, I used transactions and enabled the 'exactly_once' processing guarantee in my Kafka Streams topology. I assumed the topology processing would be 'all or nothing', i.e. that the message offset is committed only after the last node has successfully processed the message.
However, in a failure scenario, for example when the database is down and the processor fails to store a message and throws an exception, the topology dies as intended and is recreated automatically on rebalance. I assumed the topology would either re-consume the original message from the Kafka topic again, or re-consume it on application restart. However, it seems that the original message disappears and is never consumed or processed after the topology died.
What do I need to do to reprocess the original message sent to the Kafka topic? Or what Kafka configuration requires a change? Do I need to manually assign a state store and keep track of processed messages on a changelog topic?
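A hedged note, since the broker-side state isn't shown: with exactly_once the input offset is committed as part of the transaction, so when the stream thread dies the transaction is aborted and the group is left with no committed offset for that partition; combined with auto.offset.reset: latest (as in the configuration below), a restart then jumps to the end of the log instead of re-reading the uncommitted record. A sketch of the two knobs involved, assuming a client version that has StreamsUncaughtExceptionHandler (2.8+) and a KafkaStreams instance named streams:
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;

// 1) With no committed offset, start from the beginning rather than the end,
//    so a record whose transaction was aborted is re-read after a restart:
//    auto.offset.reset: earliest

// 2) Replace the dying stream thread instead of letting the whole instance
//    go into ERROR state; the failed task is retried and re-consumes the
//    record (assumes kafka-streams 2.8+, and that "streams" is the app's
//    KafkaStreams instance):
streams.setUncaughtExceptionHandler(exception ->
        StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);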
Topology:
@Singleton
public class EventTopology extends Topology {

    private final Deserializer<String> deserializer = Serdes.String().deserializer();
    private final Serializer<String> serializer = Serdes.String().serializer();
    private final EventLogMessageSerializer eventLogMessageSerializer;
    private final EventLogMessageDeserializer eventLogMessageDeserializer;
    private final EventLogProcessorSupplier eventLogProcessorSupplier;

    @Inject
    public EventTopology(EventsConfig eventsConfig,
                         EventLogMessageSerializer eventLogMessageSerializer,
                         EventLogMessageDeserializer eventLogMessageDeserializer,
                         EventLogProcessorSupplier eventLogProcessorSupplier) {
        this.eventLogMessageSerializer = eventLogMessageSerializer;
        this.eventLogMessageDeserializer = eventLogMessageDeserializer;
        this.eventLogProcessorSupplier = eventLogProcessorSupplier;
        init(eventsConfig);
    }

    private void init(EventsConfig eventsConfig) {
        var topics = eventsConfig.getTopicConfig().getTopics();
        String eventLog = topics.get("eventLog");
        addSource("EventsLogSource", deserializer, eventLogMessageDeserializer, eventLog)
                .addProcessor("EventLogProcessor", eventLogProcessorSupplier, "EventsLogSource");
    }
}
Processor:
@Singleton
@Slf4j
public class EventLogProcessor implements Processor<String, EventLogMessage> {

    private final EventLogService eventLogService;
    private ProcessorContext context;

    @Inject
    public EventLogProcessor(EventLogService eventLogService) {
        this.eventLogService = eventLogService;
    }

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, EventLogMessage value) {
        log.info("Processing EventLogMessage={}", value);
        try {
            eventLogService.storeInDatabase(value);
            context.commit();
        } catch (Exception e) {
            log.warn("Failed to process EventLogMessage={}", value, e);
            throw e;
        }
    }

    @Override
    public void close() {
    }
}
Configuration:
eventsConfig:
  saveTopicsEnabled: false
  topologyConfig:
    environment: "LOCAL"
    broker: "localhost:9093"
    enabled: true
    initialiseWaitInterval: 3 seconds
    applicationId: "eventsTopology"
    config:
      auto.offset.reset: latest
      session.timeout.ms: 6000
      fetch.max.wait.ms: 7000
      heartbeat.interval.ms: 5000
      connections.max.idle.ms: 7000
      security.protocol: SSL
      key.serializer: org.apache.kafka.common.serialization.StringSerializer
      value.serializer: org.apache.kafka.common.serialization.StringSerializer
      max.poll.records: 5
      processing.guarantee: exactly_once
      metric.reporters: com.simple.metrics.kafka.DropwizardReporter
      default.deserialization.exception.handler: org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
      enable.idempotence: true
      request.timeout.ms: 8000
      acks: all
      batch.size: 16384
      linger.ms: 1
      enable.auto.commit: false
      state.dir: "/tmp"
  topicConfig:
    topics:
      eventLog: "EVENT-LOG-LOCAL"
  kafkaTopicConfig:
    partitions: 18
    replicationFactor: 1
    config:
      retention.ms: 604800000
Test:
Feature: Feature covering the scenarios to process event log messages produced by external client.

  Background:
    Given event topology is healthy

  Scenario: event log messages produced are successfully stored in the database
    Given database is down
    And the following event log messages are published
      | deptId | userId    | eventType | endDate              | eventPayload_partner |
      | dept-1 | user-1234 | CREATE    | 2021-04-15T00:00:00Z | PARTNER-1            |
    When database is up
    And database is healthy
    Then event log stored in the database as follows
      | dept_id | user_id   | event_type | end_date             | event_payload           |
      | dept-1  | user-1234 | CREATE     | 2021-04-15T00:00:00Z | {"partner":"PARTNER-1"} |
Logs:
INFO [data-plane-kafka-request-handler-1] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group eventsTopology in state PreparingRebalance with old generation 0 (__consumer_offsets-0) (reason: Adding new member eventsTopology-57fdac0e-09fb-4aa0-8b0b-7e01809b31fa-StreamThread-1-consumer-96a3e980-4286-461e-8536-5f04ccb2c778 with group instance id None)
INFO [executor-Rebalance] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group eventsTopology generation 1 (__consumer_offsets-0)
INFO [data-plane-kafka-request-handler-2] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group eventsTopology for generation 1
INFO [data-plane-kafka-request-handler-1] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_0 with producerId 0 and producer epoch 0 on partition __transaction_state-4
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_1 with producerId 1 and producer epoch 0 on partition __transaction_state-3
...
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_16 with producerId 17 and producer epoch 0 on partition __transaction_state-37
INFO [data-plane-kafka-request-handler-4] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_1 with producerId 18 and producer epoch 0 on partition __transaction_state-42
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_0 with producerId 19 and producer epoch 0 on partition __transaction_state-43
...
INFO [data-plane-kafka-request-handler-3] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_17 with producerId 34 and producer epoch 0 on partition __transaction_state-45
INFO [data-plane-kafka-request-handler-5] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_16 with producerId 35 and producer epoch 0 on partition __transaction_state-46
INFO [pool-26-thread-1] ManagerClient - Manager request {uri:http://localhost:8081/healthcheck, method:GET, body:'', headers:{}}
INFO [pool-26-thread-1] ManagerClient - Manager response from with body {"Database":{"healthy":true},"eventsTopology":{"healthy":true}}
INFO [dw-admin-130] KafkaConnectionCheck - successfully connected to kafka broker: localhost:9093
INFO [kafka-producer-network-thread | EVENT-LOG-LOCAL-test-client-id] LocalTestEnvironment - Message: ProducerRecord(topic=EVENT-LOG-LOCAL, partition=null, headers=RecordHeaders(headers = [], isReadOnly = true), key=null, value={"endDate":1618444800000,"deptId":"dept-1","userId":"user-1234","eventType":"CREATE","eventPayload":{"previousEndDate":null,"partner":"PARTNER-1","info":null}}, timestamp=null) pushed onto topic: EVENT-LOG-LOCAL
INFO [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] EventLogProcessor - Processing EventLogMessage=EventLogMessage(endDate=Thu Apr 15 01:00:00 BST 2021, deptId=dept-1, userId=user-1234, eventType=CREATE, eventPayload=EventLogMessage.EventPayload(previousEndDate=null, partner=PARTNER-1, info=null))
WARN [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] EventLogProcessor - Failed to process EventLogMessage=EventLogMessage(endDate=Thu Apr 15 01:00:00 BST 2021, deptId=dept-1, userId=user-1234, eventType=CREATE, eventPayload=EventLogMessage.EventPayload(previousEndDate=null, partner=PARTNER-1, info=null))
exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at manager.service.EventLogService.storeInDatabase(EventLogService.java:24)
at manager.topology.processor.EventLogProcessor.process(EventLogProcessor.java:47)
at manager.topology.processor.EventLogProcessor.process(EventLogProcessor.java:19)
at org.apache.kafka.streams.processor.internals.ProcessorNode.lambda$process$2(ProcessorNode.java:142)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:142)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:236)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:216)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:168)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:96)
at org.apache.kafka.streams.processor.internals.StreamTask.lambda$process$1(StreamTask.java:679)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:679)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1033)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:690)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.processor.internals.TaskManager - stream-thread [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] Failed to process stream task 0_8 due to the following error:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_8, processor=EventsLogSource, topic=EVENT-LOG-LOCAL, partition=8, offset=0, stacktrace=exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] Encountered the following exception during processing and the thread is going to shut down:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_8, processor=EventsLogSource, topic=EVENT-LOG-LOCAL, partition=8, offset=0, stacktrace=exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.KafkaStreams - stream-client [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3] All stream threads have died. The instance will be in error state and should be closed.
Exception: java.lang.IllegalStateException thrown from the UncaughtExceptionHandler in thread "eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1"
INFO [executor-Heartbeat] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Member eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1-consumer-f11ca299-2a68-4317-a559-dd1b96cd431f in group eventsTopology has failed, removing it from the group
INFO [executor-Heartbeat] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group eventsTopology in state PreparingRebalance with old generation 1 (__consumer_offsets-0) (reason: removing member eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1-consumer-f11ca299-2a68-4317-a559-dd1b96cd431f on heartbeat expiration)
INFO [data-plane-kafka-request-handler-2] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group eventsTopology generation 2 (__consumer_offsets-0)
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group eventsTopology for generation 2
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_0 with producerId 0 and producer epoch 1 on partition __transaction_state-4
...
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_16 with producerId 35 and producer epoch 1 on partition __transaction_state-46
INFO [main] Cluster - New database host localhost/127.0.0.1:59423 added
com.jayway.awaitility.core.ConditionTimeoutException: Condition defined as a lambda expression in steps.EventLogSteps
Expecting:
<0>
to be equal to:
<1>
but was not. within 20 seconds.

Fetch offset 5705 is out of range for partition, resetting offset

I am getting the below INFO message every time in my Kafka consumer.
2020-07-04 14:54:27.640 INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer : beginning to consume batch messages , Message Count :11
2020-07-04 14:54:27.809 INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer : Execution Time :169
2020-07-04 14:54:27.809 INFO 1 --- [istener-0-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {nbi.cm.changes.mo.test23-1=OffsetAndMetadata{offset=5705, leaderEpoch=null, metadata=''}}
2020-07-04 14:54:27.812 INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer : Acknowledgment Success
2020-07-04 14:54:27.813 INFO 1 --- [istener-0-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Fetch offset 5705 is out of range for partition nbi.cm.changes.mo.test23-1, resetting offset
2020-07-04 14:54:27.820 INFO 1 --- [istener-0-0-C-1] o.a.k.c.c.internals.SubscriptionState : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Resetting offset for partition nbi.cm.changes.mo.test23-1 to offset 666703.
I got an OFFSET_OUT_OF_RANGE error in the debug log, and the consumer reset to an offset that does not actually exist; yet the console consumer is able to receive all the same messages.
But I had actually committed the offset just before that, the offsets are available in Kafka, and the log retention policy is 24 hours, so they have not been deleted from Kafka.
In the debug log, I got the following messages:
beginning to consume batch messages , Message Count :710
2020-07-02 04:58:31.486 DEBUG 1 --- [ce-notification] o.a.kafka.clients.FetchSessionHandler : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Node 1002 sent an incremental fetch response for session 253529272 with 1 response partition(s)
2020-07-02 04:58:31.486 DEBUG 1 --- [ce-notification] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Fetch READ_UNCOMMITTED at offset 11372 for partition nbi.cm.changes.mo.test12-1 returned fetch data (error=OFFSET_OUT_OF_RANGE, highWaterMark=-1, lastStableOffset = -1, logStartOffset = -1, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=0)
When exactly would we get OFFSET_OUT_OF_RANGE?
Listener class:
@KafkaListener( id = "batch-listener-0", topics = "topic1", groupId = "test", containerFactory = KafkaConsumerConfiguration.CONTAINER_FACTORY_NAME )
public void receive(
        @Payload List<String> messages,
        @Header( KafkaHeaders.RECEIVED_MESSAGE_KEY ) List<String> keys,
        @Header( KafkaHeaders.RECEIVED_PARTITION_ID ) List<Integer> partitions,
        @Header( KafkaHeaders.RECEIVED_TOPIC ) List<String> topics,
        @Header( KafkaHeaders.OFFSET ) List<Long> offsets,
        Acknowledgment ack )
{
    long startTime = System.currentTimeMillis();
    handleNotifications( messages ); // will take more than 5s to process all messages
    long endTime = System.currentTimeMillis();
    long timeElapsed = endTime - startTime;
    LOGGER.info( "Execution Time :{}", timeElapsed );
    ack.acknowledge();
    LOGGER.info( "Acknowledgment Success" );
}
Do I need to close the consumer here? I thought spring-kafka automatically takes care of that; if not, could you please tell me how to close it in spring-kafka, and also how to check whether a rebalance happened, because in the DEBUG log I am not able to see anything related to rebalancing.
I think your consumer may be rebalancing, because you are not calling consumer.close() at the end of your process.
This is a guess, but if the retention policy isn't kicking in (and the logs are not being deleted), it's the only reason I can think of for that behaviour.
Update:
As you set them up as @KafkaListeners, you could just call stop() on the KafkaListenerEndpointRegistry: kafkaListenerEndpointRegistry.stop()
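For completeness, a sketch of stopping just the one container rather than the whole registry; the container id matches the @KafkaListener above, while the bean and method names are made up:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class ListenerControl {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    public void stopBatchListener() {
        // Stops only the container created for
        // @KafkaListener(id = "batch-listener-0", ...)
        MessageListenerContainer container = registry.getListenerContainer("batch-listener-0");
        if (container != null) {
            container.stop();
        }
    }
}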

java.lang.NullPointerException: null at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877) in pulsar-io-jdbc connector

I am using Apache Pulsar v2.4.2. Taking the pulsar-io-jdbc connector as a reference, I created a custom sink connector packaged as a NAR.
I am running the following commands to upload the schema and create the connector.
1. Upload the AVRO schema:
bin/pulsar-admin schemas upload myjdbc --filename ./connectors/pulsar-io-jdbc-schema.conf
Here is the AVRO schema:
{
    "type": "AVRO",
    "schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}",
    "properties": {}
}
2. Create the connector:
bin/pulsar-admin sinks localrun --archive ./connectors/pulsar-io-jdbc-2.4.2.nar --inputs myjdbc --name myjdbc --sink-config-file ./connectors/pulsar-io-jdbc.yaml
Once I send a message to the topic "myjdbc" via producer.SendAsync(message), where the message is built as follows:
for (int i = 0; i < count; i++)
{
    JObject obj = new JObject();
    obj.Add("id", i);
    obj.Add("name", "testing topic" + i);
    var str = obj.ToString(Formatting.None);
    var messageId = await producer.SendAsync(Encoding.UTF8.GetBytes(str));
    Console.WriteLine($"MessageId is: '{messageId}'");
}
The Pulsar localrun connector then gets the following exception:
19:48:37.437 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ClientCnx - [id: 0x557fb218, L:/127.0.0.1:39488 - R:localhost/127.0.0.1:6650] Connected through proxy to target broker at 192.168.1.97:6650
19:48:37.445 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [myjdbc][public/default/myjdbc] Subscribing to topic on cnx [id: 0x557fb218, L:/127.0.0.1:39488 - R:localhost/127.0.0.1:6650]
19:48:37.503 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [myjdbc][public/default/myjdbc] Subscribed to topic on localhost/127.0.0.1:6650 -- consumer: 0
19:48:37.557 [pulsar-client-io-1-1] INFO com.scurrilous.circe.checksum.Crc32cIntChecksum - SSE4.2 CRC32C provider initialized
19:48:37.715 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [myjdbc] [public/default/myjdbc] Closed consumer
19:48:37.942 [main] INFO org.apache.pulsar.functions.LocalRunner - RuntimeSpawner quit because of
java.lang.NullPointerException: null
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877) ~[com.google.guava-guava-25.1-jre.jar:?]
at com.google.common.cache.LocalCache.get(LocalCache.java:3950) ~[com.google.guava-guava-25.1-jre.jar:?]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3973) ~[com.google.guava-guava-25.1-jre.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4957) ~[com.google.guava-guava-25.1-jre.jar:?]
at org.apache.pulsar.client.impl.schema.StructSchema.decode(StructSchema.java:94) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.impl.schema.AutoConsumeSchema.decode(AutoConsumeSchema.java:72) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.impl.schema.AutoConsumeSchema.decode(AutoConsumeSchema.java:36) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.api.Schema.decode(Schema.java:97) ~[org.apache.pulsar-pulsar-client-api-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.impl.MessageImpl.getValue(MessageImpl.java:268) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.functions.source.PulsarRecord.getValue(PulsarRecord.java:74) ~[org.apache.pulsar-pulsar-functions-instance-2.4.2.jar:2.4.2]
at org.apache.pulsar.functions.instance.JavaInstanceRunnable.readInput(JavaInstanceRunnable.java:473) ~[org.apache.pulsar-pulsar-functions-instance-2.4.2.jar:2.4.2]
at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:246) ~[org.apache.pulsar-pulsar-functions-instance-2.4.2.jar:2.4.2]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_232]
19:49:03.105 [function-timer-thread-5-1] ERROR org.apache.pulsar.functions.runtime.RuntimeSpawner - public/default/myjdbc-java.lang.NullPointerException Function Container is dead with exception.. restarting
Also, the write() method is never triggered.
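A hedged reading of the stack trace above: the sink consumes with AutoConsumeSchema, and StructSchema.decode looks the writer schema up by the schema version carried on the message; a message published as raw bytes (as in the C# loop above) carries no schema version, which would explain the checkNotNull NPE before write() is ever reached. A sketch of publishing through a schema-typed Java producer instead, assuming schema-less publishing is the root cause (class name and service URL are made up):
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class SchemaProducerSketch {
    // Mirrors the uploaded AVRO record "Test": nullable int id, nullable string name.
    public static class Test {
        public Integer id;
        public String name;
    }

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumed broker URL
                .build();
        // A typed producer attaches the schema (and its version) to each
        // message, which the sink's AUTO_CONSUME decoder needs.
        try (Producer<Test> producer = client.newProducer(Schema.AVRO(Test.class))
                .topic("myjdbc")
                .create()) {
            Test msg = new Test();
            msg.id = 1;
            msg.name = "testing topic1";
            producer.send(msg);
        }
        client.close();
    }
}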

What causes a BAD Command Argument Error coming from an Exchange server?

I am writing a Java program to iterate through all emails on an Exchange server and move them to a folder. I am iterating through the emails correctly and have found the attachment in the multipart message, but when it comes to the point where I need to save the file, I get the error "BAD Command Argument Error. 11". I have searched for the past few days and have not found what is causing this. What is causing this issue?
Here is my code:
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.Enumeration;
import java.util.Properties;
import javax.mail.Address;
import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.Header;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.Part;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Store;
import javax.mail.internet.MimeBodyPart;
import javax.mail.search.FlagTerm;
import com.sun.mail.util.BASE64DecoderStream;
public class EmailAttachmentMoverIMAP {

    private static Store store = null;

    public static void main(String args[]) {
        store = buildStore();
        Message[] messages = getMessages();
        iterateAttachments(messages);
        try { store.close(); } catch (MessagingException e) { e.printStackTrace(); }
    }

    public static Store buildStore() {
        Properties properties = System.getProperties();
        properties = new Properties();
        properties.setProperty("mail.host", "MYHOST");
        properties.setProperty("mail.port", "143");
        properties.setProperty("mail.transport.protocol", "imap");
        Session session = Session.getInstance(properties,
                new javax.mail.Authenticator() {
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("USER", "PASS");
                    }
                });
        try {
            store = session.getStore("imap");
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        return store;
    }

    public static Message[] getMessages() {
        Message[] messages = null;
        try
        {
            store.connect();
            Folder inbox = store.getFolder("INBOX");
            inbox.open(Folder.READ_WRITE);
            Flags seen = new Flags(Flags.Flag.SEEN);
            FlagTerm unseenFlagTerm = new FlagTerm(seen, false);
            messages = inbox.search(unseenFlagTerm);
            if (messages.length == 0) System.out.println("No messages found.");
        }
        catch (MessagingException e)
        {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        return messages;
    }

    public static void iterateAttachments(Message[] messages) {
        System.out.println("Moving " + messages.length + " attachments");
        for (int i = 0; i < messages.length; i++) {
            System.out.println("moving attachment " + i + " of " + messages.length);
            moveAttachments(messages[i], i);
        }
    }

    public static void moveAttachments(Message message, int num) {
        try {
            Address[] fromAddress = message.getFrom();
            String from = fromAddress[0].toString();
            String subject = message.getSubject();
            String sentDate = message.getSentDate().toString();
            String contentType = message.getContentType();
            String messageContent = "";
            // store attachment file names, separated by commas
            String attachFiles = "";
            if (contentType.contains("multipart")) {
                // content may contain attachments
                Multipart multiPart = (Multipart) message.getContent();
                int numberOfParts = multiPart.getCount();
                for (int partCount = 0; partCount < numberOfParts; partCount++) {
                    MimeBodyPart part = (MimeBodyPart) multiPart.getBodyPart(partCount);
                    if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition())) {
                        // this part is an attachment
                        String fileName = part.getFileName();
                        attachFiles += fileName + ", ";
                        part.saveFile("C:\\skyspark-2.1.8\\db\\demo\\io\\" + File.separator + fileName);
                    } else {
                        // this part may be the excel file
                        if (part.getContentType().equalsIgnoreCase("application/vnd.ms-excel")) {
                            getPartDetails(part);
                            part.saveFile("C:\\skyspark-2.1.8\\db\\demo\\io\\" + File.separator + part.getFileName());
                        }
                    }
                }
                if (attachFiles.length() > 1) {
                    attachFiles = attachFiles.substring(0, attachFiles.length() - 2);
                }
            } else if (contentType.contains("text/plain")
                    || contentType.contains("text/html")) {
                Object content = message.getContent();
                if (content != null) {
                    messageContent = content.toString();
                }
            }
            System.out.println("\t From: " + from);
            System.out.println("\t Subject: " + subject);
            System.out.println("\t Sent Date: " + sentDate);
            System.out.println("\t Message: " + messageContent);
            System.out.println("\t Attachments: " + attachFiles);
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void getPartDetails(MimeBodyPart part) {
        try {
            System.out.println("part content class: " + part.getContent().getClass().toString());
            System.out.println("part content Type: " + part.getContentType());
            System.out.println("part description: " + part.getDescription());
            System.out.println("part disposition: " + part.getDisposition());
            System.out.println("part encoding: " + part.getEncoding());
            System.out.println("part file name: " + part.getFileName());
            System.out.println("part line count: " + part.getLineCount());
            System.out.println("part size: " + part.getSize());
            Enumeration headers = part.getAllHeaders();
            int i = 0;
            while (headers.hasMoreElements()) {
                Header hdr = (Header) headers.nextElement();
                System.out.println("header " + i + " name: " + hdr.getName());
                System.out.println("header " + i + " value: " + hdr.getValue());
                i++;
            }
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Here is a sample of the output:
Moving 3975 attachments
moving attachment 0 of 3975
part content class: class com.sun.mail.util.BASE64DecoderStream
part content Type: application/vnd.ms-excel
part description: null
part disposition: inline
part encoding: base64
part file name: Temp 2015 Mar 23 0601.csv
part line count: -1
part size: 5342
header 0 name: Content-Type
header 0 value: application/vnd.ms-excel
header 1 name: Content-Transfer-Encoding
header 1 value: base64
header 2 name: Content-Disposition
header 2 value: inline; filename="Temp 2015 Mar 23 0601.csv"
java.io.IOException: A8 BAD Command Argument Error. 11
at com.sun.mail.imap.IMAPInputStream.fill(IMAPInputStream.java:154)
at com.sun.mail.imap.IMAPInputStream.read(IMAPInputStream.java:213)
at com.sun.mail.imap.IMAPInputStream.read(IMAPInputStream.java:239)
at com.sun.mail.util.BASE64DecoderStream.getByte(BASE64DecoderStream.java:358)
at com.sun.mail.util.BASE64DecoderStream.decode(BASE64DecoderStream.java:249)
at com.sun.mail.util.BASE64DecoderStream.read(BASE64DecoderStream.java:144)
at java.io.FilterInputStream.read(Unknown Source)
at javax.mail.internet.MimeBodyPart.saveFile(MimeBodyPart.java:825)
at javax.mail.internet.MimeBodyPart.saveFile(MimeBodyPart.java:851)
at EmailAttachmentMoverIMAP.moveAttachments(EmailAttachmentMoverIMAP.java:111)
at EmailAttachmentMoverIMAP.iterateAttachments(EmailAttachmentMoverIMAP.java:79)
at EmailAttachmentMoverIMAP.main(EmailAttachmentMoverIMAP.java:29)
moving attachment 1 of 3975
part content class: class com.sun.mail.util.BASE64DecoderStream
part content Type: application/vnd.ms-excel
part description: null
part disposition: inline
part encoding: base64
part file name: Temperature 2015 Mar 23 0601.csv
part line count: -1
part size: 5342
header 0 name: Content-Type
header 0 value: application/vnd.ms-excel
header 1 name: Content-Transfer-Encoding
header 1 value: base64
header 2 name: Content-Disposition
header 2 value: inline; filename="Temperature 2015 Mar 23 0601.csv"
java.io.IOException: A13 BAD Command Argument Error. 11
at com.sun.mail.imap.IMAPInputStream.fill(IMAPInputStream.java:154)
at com.sun.mail.imap.IMAPInputStream.read(IMAPInputStream.java:213)
at com.sun.mail.imap.IMAPInputStream.read(IMAPInputStream.java:239)
at com.sun.mail.util.BASE64DecoderStream.getByte(BASE64DecoderStream.java:358)
at com.sun.mail.util.BASE64DecoderStream.decode(BASE64DecoderStream.java:249)
at com.sun.mail.util.BASE64DecoderStream.read(BASE64DecoderStream.java:144)
at java.io.FilterInputStream.read(Unknown Source)
at javax.mail.internet.MimeBodyPart.saveFile(MimeBodyPart.java:825)
at javax.mail.internet.MimeBodyPart.saveFile(MimeBodyPart.java:851)
at EmailAttachmentMoverIMAP.moveAttachments(EmailAttachmentMoverIMAP.java:111)
at EmailAttachmentMoverIMAP.iterateAttachments(EmailAttachmentMoverIMAP.java:79)
at EmailAttachmentMoverIMAP.main(EmailAttachmentMoverIMAP.java:29)
moving attachment 2 of 3975
As you can see, I am finding the attachment (it has a file name and a file size), but I cannot write it to an output stream (I've tried to get the input stream and write it to an output stream in every way possible and get the same issue).
Edit: here is the session trace with protocol debugging enabled:
DEBUG: JavaMail version 1.4.7
DEBUG: successfully loaded resource: /META-INF/javamail.default.providers
DEBUG: Tables of loaded providers
DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPSSLTransport=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Oracle], com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Oracle], com.sun.mail.imap.IMAPSSLStore=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Oracle], com.sun.mail.pop3.POP3SSLStore=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Oracle], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Oracle], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Oracle]}
DEBUG: Providers Listed By Protocol: {imaps=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Oracle], imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Oracle], smtps=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Oracle], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Oracle], pop3s=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Oracle], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Oracle]}
DEBUG: successfully loaded resource: /META-INF/javamail.default.address.map
DEBUG: getProvider() returning javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Oracle]
DEBUG IMAP: mail.imap.fetchsize: 16384
DEBUG IMAP: mail.imap.ignorebodystructuresize: false
DEBUG IMAP: mail.imap.statuscachetimeout: 1000
DEBUG IMAP: mail.imap.appendbuffersize: -1
DEBUG IMAP: mail.imap.minidletime: 10
DEBUG IMAP: protocolConnect returning false, host=ACTUALHOSTADDRESS, user=ACTUALUSER, password=<null>
DEBUG IMAP: trying to connect to host "ACTUALHOSTADDRESS", port 143, isSSL false
* OK The Microsoft Exchange IMAP4 service is ready.
A0 CAPABILITY
* CAPABILITY IMAP4 IMAP4rev1 AUTH=NTLM AUTH=GSSAPI AUTH=PLAIN STARTTLS CHILDREN IDLE NAMESPACE LITERAL+
A0 OK CAPABILITY completed.
DEBUG IMAP: AUTH: NTLM
DEBUG IMAP: AUTH: GSSAPI
DEBUG IMAP: AUTH: PLAIN
DEBUG IMAP: protocolConnect login, host=ACTUALHOSTADDRESS, user=ACTUALUSERNAME, password=<non-null>
DEBUG IMAP: AUTHENTICATE PLAIN command trace suppressed
DEBUG IMAP: AUTHENTICATE PLAIN command result: A1 OK AUTHENTICATE completed.
A2 CAPABILITY
* CAPABILITY IMAP4 IMAP4rev1 AUTH=NTLM AUTH=GSSAPI AUTH=PLAIN STARTTLS CHILDREN IDLE NAMESPACE LITERAL+
A2 OK CAPABILITY completed.
DEBUG IMAP: AUTH: NTLM
DEBUG IMAP: AUTH: GSSAPI
DEBUG IMAP: AUTH: PLAIN
DEBUG IMAP: connection available -- size: 1
A3 SELECT INBOX
* 6184 EXISTS
* 254 RECENT
* FLAGS (\Seen \Answered \Flagged \Deleted \Draft $MDNSent)
* OK [PERMANENTFLAGS (\Seen \Answered \Flagged \Deleted \Draft $MDNSent)] Permanent flags
* OK [UNSEEN 5323] Is the first unseen message
* OK [UIDVALIDITY 141703] UIDVALIDITY value
* OK [UIDNEXT 6298] The next unique identifier value
A3 OK [READ-WRITE] SELECT completed.
A4 SEARCH UNSEEN ALL
* SEARCH 5323 5324 5325 5326 5327 5328 5329 5330 5331 5332 5333 5334 5335 5336 5337 5338 5339 5340 5341 5342 5343 5344 5345 5346 5347 5348 5349 5350 5351 5352 5353 5354 5355 5356 5358 5359 5360 5361 5362 5363 5364 5365 5366 5367 5368 5369 5370 5371 5372 5373 5374 5375 5376 5377 5378 5379 5380 5381 5382 5383 5384 5385 5386 5387 5388 5389 5390 5391 5392 5393 5394 5395 5396 5397 5398 5399 5400 5401 5402 5403 5404 5405 5406 5407 5408 5409 5410 5411 5412 5413 5414 5415 5416 5417 5418 5419 5420 5421 5422 5423 5424 5425 5426 5427 5428 5429 5430 5431 5432 5433 5434 5435 5436 5437 5438 5439 5440 5441 5442 5443 5444 5445 5446 5447 5448 5449 5450 5451 5452 5453 5454 5455 5456 5457 5458 5459 5460 5461 5462 5463 5464 5465 5466 5467 5468 5469 5470 5471 5472 5473 5474 5475 5476 5477 5478 5479 5480 5481 5482 5483 5484 5485 5486 5487 5488 5489 5490 5491 5492 5493 5494 5495 5496 5497 5498 5499 5500 5501 5502 5503 5504 5505 5506 5507 5508 5509 5510 5511 5512 5513 5514 5515 5516 5517 5518 5519 5520 5521 5522 5523 5524 5525 5526 5527 5528 5529 5530 5531 5532 5533 5534 5535 5536 5537 5538 5539 5540 5541 5542 5543 5544 5545 5546 5547 5548 5549 5550 5551 5552 5553 5554 5555 5556 5557 5558 5559 5560 5561 5562 5563 5564 5565 5566 5567 5568 5569 5570 5571 5572 5573 5574 5575 5576 5577 5578 5579 5580 5581 5582 5583 5584 5585 5586 5587 5588 5589 5590 5591 5592 5593 5594 5595 5596 5597 5598 5599 5600 5601 5602 5603 5604 5605 5606 5607 5608 5609 5610 5611 5612 5613 5614 5615 5616 5617 5618 5619 5620 5621 5622 5623 5624 5625 5626 5627 5628 5629 5630 5631 5632 5633 5634 5635 5636 5637 5638 5639 5640 5641 5642 5643 5644 5645 5646 5647 5648 5649 5650 5651 5652 5653 5654 5655 5656 5657 5658 5659 5660 5661 5662 5663 5664 5665 5666 5667 5668 5669 5670 5671 5672 5673 5674 5675 5676 5677 5678 5679 5680 5681 5682 5683 5684 5685 5686 5687 5688 5689 5690 5691 5692 5693 5694 5695 5696 5697 5698 5699 5700 5701 5702 5703 5704 5705 5706 5707 5708 5709 5710 5711 5712 5713 5714 5715 5716 5717 5718 5719 5720 5721 5722 5723 5724 5725 5726 5727 5728 5729 5730 5731 5732 5733 5734 5735 5736 5737 5738 5739 5740 5741 5742 5743 5744 5745 5746 5747 5748 5749 5750 5751 5752 5753 5754 5755 5756 5757 5758 5759 5760 5761 5762 5763 5764 5765 5766 5767 5768 5769 5770 5771 5772 5773 5774 5775 5776 5777 5778 5779 5780 5781 5782 5783 5784 5785 5786 5787 5788 5789 5790 5791 5792 5793 5794 5795 5796 5797 5798 5799 5800 5801 5802 5803 5804 5805 5806 5807 5808 5809 5810 5811 5812 5813 5814 5815 5816 5817 5818 5819 5820 5821 5822 5823 5824 5825 5826 5827 5828 5829 5830 5831 5832 5833 5834 5835 5836 5837 5838 5839 5840 5841 5842 5843 5844 5845 5846 5847 5848 5849 5850 5851 5852 5853 5854 5855 5856 5857 5858 5859 5860 5861 5862 5863 5864 5865 5866 5867 5868 5869 5870 5871 5872 5873 5874 5875 5876 5877 5878 5879 5880 5881 5882 5883 5884 5885 5886 5887 5888 5889 5890 5891 5892 5893 5894 5895 5896 5897 5898 5899 5900 5901 5902 5903 5904 5905 5906 5907 5908 5909 5910 5911 5912 5913 5914 5915 5916 5917 5918 5919 5920 5921 5922 5923 5924 5925 5926 5927 5928 5929 5930 5931 5932 5933 5934 5935 5936 5937 5938 5939 5940 5941 5942 5943 5944 5945 5946 5947 5948 5949 5950 5951 5952 5953 5954 5955 5956 5957 5958 5959 5960 5961 5962 5963 5964 5965 5966 5967 5968 5969 5970 5971 5972 5973 5974 5975 5976 5977 5978 5979 5980 5981 5982 5983 5984 5985 5986 5987 5988 5989 5990 5991 5992 5993 5994 5995 5996 5997 5998 5999 6000 6001 6002 6003 6004 6005 6006 6007 6008 6009 6010 6011 6012 6013 6014 6015 6016 6017 6018 6019 6020 6021 6022 6023 6024 6025 6026 6027 6028 6029 6030 6031 6032 
6033 6034 6035 6036 6037 6038 6039 6040 6041 6042 6043 6044 6045 6046 6047 6048 6049 6050 6051 6052 6053 6054 6055 6056 6057 6058 6059 6060 6061 6062 6063 6064 6065 6066 6067 6068 6069 6070 6071 6072 6073 6074 6075 6076 6077 6078 6079 6080 6081 6082 6083 6084 6085 6086 6087 6088 6089 6090 6091 6092 6093 6094 6095 6096 6097 6098 6099 6100 6101 6102 6103 6104 6105 6106 6107 6108 6109 6110 6111 6112 6113 6114 6115 6116 6117 6118 6119 6120 6121 6122 6123 6124 6125 6126 6127 6128 6129 6130 6131 6132 6133 6134 6135 6136 6137 6138 6139 6140 6141 6142 6144 6145 6146 6147 6148 6149 6150 6151 6152 6153 6154 6155 6156 6157 6158 6159 6160 6161 6162 6163 6164 6165 6166 6167 6168 6169 6170 6171 6172 6173 6174 6175 6176 6177 6178 6179 6180 6181 6182 6183 6184
A4 OK SEARCH completed.
Moving 860 attachments
moving attachment 0 of 860
A5 FETCH 5323 (ENVELOPE INTERNALDATE RFC822.SIZE)
* 5323 FETCH (UID 5430 ENVELOPE ("Thu, 30 Apr 2015 10:44:59 -0500" "=?utf-8?B?TWljcm9zb2Z0IE91dGxvb2sgVGVzdCBNZXNzYWdl?=" (("Microsoft Outlook" NIL "USER" "HOST")) NIL NIL (("=?utf-8?B?TktDc2QgU3Bhcms=?=" NIL "nkcsdspark" "HOST")) NIL NIL NIL "<2ac2507b-1f41-4a49-8835-cf396ffefced@HOST.com>") INTERNALDATE "30-Apr-2015 10:44:59 -0500" RFC822.SIZE 839)
A5 OK FETCH completed.
A6 FETCH 5323 (BODYSTRUCTURE)
* 5323 FETCH (UID 5430 BODYSTRUCTURE ("text" "html" ("charset" "utf-8") NIL NIL "8bit" 181 2 NIL NIL NIL NIL))
A6 OK FETCH completed.
A7 FETCH 5323 (BODY[TEXT]<0.181>)
A7 BAD Command Argument Error. 11
DEBUG IMAP: IMAPProtocol noop
A8 NOOP
A8 OK NOOP completed.
moving attachment 1 of 860
A9 FETCH 5324 (ENVELOPE INTERNALDATE RFC822.SIZE)
java.io.IOException: A7 BAD Command Argument Error. 11
at com.sun.mail.imap.IMAPInputStream.fill(IMAPInputStream.java:154)
at com.sun.mail.imap.IMAPInputStream.read(IMAPInputStream.java:213)
at sun.nio.cs.StreamDecoder.readBytes(Unknown Source)
at sun.nio.cs.StreamDecoder.implRead(Unknown Source)
at sun.nio.cs.StreamDecoder.read(Unknown Source)
at java.io.InputStreamReader.read(Unknown Source)
at com.sun.mail.handlers.text_plain.getContent(text_plain.java:125)
at javax.activation.DataSourceDataContentHandler.getContent(Unknown Source)
at javax.activation.DataHandler.getContent(Unknown Source)
at javax.mail.internet.MimeMessage.getContent(MimeMessage.java:1420)
at EmailAttachmentMoverCSCIMAP.moveAttachments(EmailAttachmentMoverCSCIMAP.java:118)
at EmailAttachmentMoverCSCIMAP.iterateAttachments(EmailAttachmentMoverCSCIMAP.java:75)
at EmailAttachmentMoverCSCIMAP.main(EmailAttachmentMoverCSCIMAP.java:23)
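For later readers: the command that fails in the trace is the partial fetch (A7 FETCH 5323 (BODY[TEXT]<0.181>)), and some Exchange IMAP4 services reject partial FETCH ranges. A one-line tweak to try in buildStore(), assuming that is the trigger here; mail.imap.partialfetch is a documented JavaMail IMAP property:
// Disable partial fetching so JavaMail issues plain BODY[...] fetches
// instead of BODY[...]<start.size> ranges (the value must be a String).
properties.setProperty("mail.imap.partialfetch", "false");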
