What is in the Kafka DefaultRecord value field? - Java

Recently, I was reviewing and testing the Kafka code and found a strange case:
I print the ByteBuffer at the entry of SocketServer.processCompletedReceives, and I also print the record value at the point where the Log stores it, as follows.
At the entry of SocketServer:
private def processCompletedReceives() {
  selector.completedReceives.asScala.foreach { receive =>
    try {
      openOrClosingChannel(receive.source) match {
        case Some(channel) =>
          val header = RequestHeader.parse(receive.payload)
          val connectionId = receive.source
          val context = new RequestContext(header, connectionId, channel.socketAddress,
            channel.principal, listenerName, securityProtocol)
          val req = new RequestChannel.Request(processor = id, context = context,
            startTimeNanos = time.nanoseconds, memoryPool, receive.payload, requestChannel.metrics)
          if (header.apiKey() == ApiKeys.PRODUCE) {
            LogHelper.log("produce request: %v" + java.util.Arrays.toString(receive.payload.array()))
          }
          ...
At the point of the Log:
validRecords.records().asScala.foreach { record =>
  LogHelper.log("buffer info: value " + java.util.Arrays.toString(record.value().array()))
}
But the two printed results are different, and record.value() is not the value I passed in from the client, which looks like this:
public void run() {
    int messageNo = 1;
    while (true) {
        String messageStr = "Message_" + messageNo;
        long startTime = System.currentTimeMillis();
        if (isAsync) { // Send asynchronously
            producer.send(new ProducerRecord<>(topic,
                messageNo,
                messageStr), new DemoCallBack(startTime, messageNo, messageStr));
        } else { // Send synchronously
            try {
                producer.send(new ProducerRecord<>(topic,
                    messageNo,
                    messageStr)).get();
                System.out.println("Sent message: (" + messageNo + ", " + messageStr + ")");
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        }
        ++messageNo;
    }
}
The printed result is not the String messageStr = "Message_" + messageNo that I sent.
So what happened in this case?
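Most likely the two dumps differ because of how the buffers are printed, not because the value changed: record.value() returns a ByteBuffer that is only a view into the record batch, and calling array() on it returns the entire backing batch buffer (batch header, key, and so on), not just the value bytes. A minimal sketch of printing only the value bytes, under that assumption (the helper name dumpValue is just for illustration):

import java.nio.ByteBuffer;
import java.util.Arrays;

public class ValueDump {
    // Copies only the readable bytes of the value view instead of the whole backing array.
    static void dumpValue(ByteBuffer value) {
        ByteBuffer dup = value.duplicate();       // leave the original position/limit untouched
        byte[] bytes = new byte[dup.remaining()]; // only position..limit, i.e. the value itself
        dup.get(bytes);
        System.out.println("value bytes: " + Arrays.toString(bytes));
        System.out.println("value string: " + new String(bytes));
    }
}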

Solved. I wrote the code as follows:
public class KVExtractor {
    private static final Logger logger = LoggerFactory.getLogger(KVExtractor.class);

    public static Map.Entry<byte[], byte[]> extract(Record record) {
        if (record.hasKey() && record.hasValue()) {
            byte[] key = new byte[record.key().limit()];
            record.key().get(key);
            byte[] value = new byte[record.value().limit()];
            record.value().get(value);
            System.out.println("key : " + new String(key) + " value: " + new String(value));
            return new AbstractMap.SimpleEntry<byte[], byte[]>(key, value);
        } else if (record.hasValue()) {
            // illegal impl
            byte[] data = new byte[record.value().limit()];
            record.value().get(data);
            System.out.println("no key but with value : " + new String(data));
        }
        return null;
    }
}
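For example, a hypothetical caller that walks a batch of records (e.g. a MemoryRecords instance named records) and feeds each one to the extractor might look roughly like this:

// Hedged usage sketch: Records.records() yields the individual Record entries of a batch.
for (Record record : records.records()) {
    Map.Entry<byte[], byte[]> kv = KVExtractor.extract(record);
    if (kv != null) {
        System.out.println("key=" + new String(kv.getKey()) + " value=" + new String(kv.getValue()));
    }
}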

Related

Reading xml messages from ibm mq

I am trying to retrieve some XML messages as a readable string straight from IBM MQ and render them on a UI.
I used the following code and I get this error: GET Exception: com.ibm.mq.MQException: MQJE001: Completion Code '1', Reason '2110'.
What changes could I make to get the messages back as a string? When the messages have the format MQSTR, they are rendered on the UI correctly.
MQQueueManager _queueManager = null;
int port = inputPort;
String hostname = host;
String channel = chanel;
String qManager = queuemanager;
String inputQName = queuename;
MQEnvironment.hostname = hostname;
MQEnvironment.channel = channel;
MQEnvironment.port = port;
_queueManager = new MQQueueManager(qManager);
int openOptions = MQC.MQOO_INQUIRE + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_BROWSE;
MQQueue queue = _queueManager.accessQueue( inputQName,
openOptions,
null, // default q manager
null, // no dynamic q name
null ); // no alternate user id
System.out.println("MQRead is now connected.\n");
int depth = queue.getCurrentDepth();
System.out.println("Current depth: " + depth + "\n");
if (depth == 0)
{
System.out.println("Depth is zero");
}
MQGetMessageOptions getOptions = new MQGetMessageOptions();
getOptions.options = MQC.MQGMO_NO_WAIT + MQC.MQGMO_FAIL_IF_QUIESCING + MQC.MQGMO_CONVERT + MQC.MQGMO_BROWSE_NEXT;
ArrayList<MessageDTO> myMessages = new ArrayList<>();
while(true)
{
MQMessage message = new MQMessage();
try
{
queue.get(message, getOptions);
byte[] b = new byte[message.getMessageLength()];
message.readFully(b);
System.out.println (new MQHeaderList (message, false));
String newMessage = new String(b);
MessageDTO newMsg = new MessageDTO();
newMsg.setMessage(newMessage);
Random rand = new Random();
newMsg.setMessageNumber(rand.nextInt());
myMessages.add(newMsg);
model.addAttribute("myMessages", myMessages);
System.out.println(myMessages);
message.clearMessage();
}
catch (IOException e)
{
System.out.println("IOException during GET: " + e.getMessage());
break;
}
catch (MQException e)
{
if (e.completionCode == 2 && e.reasonCode == MQException.MQRC_NO_MSG_AVAILABLE) {
if (depth > 0)
{
System.out.println("All messages read.");
}
}
else
{
System.out.println("GET Exception: " + e);
}
break;
} catch (MQDataException e) {
e.printStackTrace();
}
}
queue.close();
_queueManager.disconnect();
}
}
I hope this helps someone one day: the issue was caused by a format error. I ended up using RFHUtil to change the message format to MQSTR, and then I got my messages back as desired.
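For reference, reason code 2110 (MQRC_FORMAT_ERROR) typically appears when MQGMO_CONVERT is requested on the get but the message's format field is not something the queue manager can convert, e.g. blank instead of MQSTR. A minimal sketch of the putting side that avoids this, assuming the IBM MQ classes for Java and a String variable xmlPayload holding the XML:

// Sketch: mark the payload as character data so MQGMO_CONVERT on the getting side can convert it.
MQMessage msg = new MQMessage();
msg.format = MQC.MQFMT_STRING;   // "MQSTR   "
msg.writeString(xmlPayload);     // xmlPayload is an assumed String containing the XML document
queue.put(msg, new MQPutMessageOptions());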

Internal Server Error Exception from Facebook Graph API

I am trying to fetch campaign and ad-level information from Facebook using the Graph API.
Campaign-level information comes back without errors for all accounts.
But for the ad-level info, I am not getting all of the data. The API returns only 50 ads; the remaining ones fail with an "Internal Server Error" exception.
I've been getting this problem all day for one particular account; every other account works fine.
Below is the HTTP response:
response = (okhttp3.Response) Response{protocol=http/1.1, code=500, message=Internal Server Error,
url=https://graph.facebook.com/v3.3/act_fbAccountId/ads?access_token=myAccessToken&fields=%5B%22account_id%22%2C%22campaign_id%22%2C%22adset_id%22%2C%22id%22%2C%22bid_amount%22%2C%22name%22%2C%22status%22%2C%22preview_shareable_link%22%5D&limit=25&after=QVFIUkFzZAVdONXEwMXc5NjlOWURwczRQN0V3QW12NndRa0I4ZAUxsSjNkZA0FTRHR1U1dnaTQ5MTh2QnJ3bzNwWkNPd21jbC1tbjRmZAF81WVBsc0txaERCMHVn}
Below is the code
public List facebookAdsSynchThroughJson(String accessTokens, Long advertAccId, Properties googleStructureProperties, byte isOnScheduler, String appSecret) {
this.access_tokens = accessTokens;
this.appSecret = appSecret;
APIContext context = new APIContext(accessTokens, appSecret).enableDebug(true);
List<FacebookAdsStructure> adsList = new ArrayList<>();
String ads = "";
JSONObject advertIdObj = null;
JSONArray dataList = null;
ObjectMapper mapper = null;
boolean cond = true;
boolean success = false;
int limitValue = 100;
try {
while (cond) {
try {
AdAccount adAccount = new AdAccount(advertAccId, context).get().execute();
List adsFields = new ArrayList();
adsFields.add("account_id");
adsFields.add("campaign_id");
adsFields.add("adset_id");
adsFields.add("id");
adsFields.add("bid_amount");
adsFields.add("name");
adsFields.add("status");
adsFields.add("preview_shareable_link");
Map<String, Object> adsmap = new HashMap<>();
adsmap.put("fields", adsFields);
adsmap.put("limit", limitValue);
ads = adAccount.getAds().setParams(adsmap).execute().getRawResponseAsJsonObject().toString();
} catch (APIException ex) {
LOGGER.info("Exception thrown when fetching ads for Account Id: " + advertAccId + " with limit value: " + limitValue);
LOGGER.error(ex);
if (limitValue > 6) {
cond = true;
limitValue = limitValue / 2;
} else {
cond = false;
success = false;
break;
}
}
if (!ads.isEmpty()) {
cond = false;
success = true;
}
}
if (success) {
advertIdObj = new JSONObject(ads);
dataList = advertIdObj.getJSONArray("data");
mapper = new ObjectMapper();
adsList = mapper.readValue(dataList.toString(), new TypeReference<List<FacebookAdsStructure>>() {
});
JsonObject obj;
JsonParser parser = new JsonParser();
JsonElement result = parser.parse(ads);
obj = result.getAsJsonObject();
int maxTries = 1;
if (obj.has("data") && maxTries <= 5) {
while (obj.has("paging") && maxTries <= 5) {
JsonObject paging = obj.get("paging").getAsJsonObject();
if (paging.has("cursors")) {
JsonObject cursors = paging.get("cursors").getAsJsonObject();
String before = cursors.has("before") ? cursors.get("before").getAsString() : null;
String after = cursors.has("after") ? cursors.get("after").getAsString() : null;
}
String previous = paging.has("previous") ? paging.get("previous").getAsString() : null;
String next = paging.has("next") ? paging.get("next").getAsString() : "empty";
if (!next.equalsIgnoreCase("empty")) {
OkHttpClient client = new OkHttpClient.Builder()
.connectTimeout(5, TimeUnit.MINUTES)
.writeTimeout(5, TimeUnit.MINUTES)
.readTimeout(5, TimeUnit.MINUTES)
.build();
Request request1 = new Request.Builder().url(next).get().build();
Response response = client.newCall(request1).execute();
if (response.isSuccessful()) {
maxTries = 1;
ads = response.body().string();
advertIdObj = new JSONObject(ads);
dataList = advertIdObj.getJSONArray("data");
mapper = new ObjectMapper();
List<FacebookAdsStructure> adsLists = new ArrayList<>();
adsLists = mapper.readValue(dataList.toString(), new TypeReference<List<FacebookAdsStructure>>() {
});
adsList.addAll(adsLists);
LOGGER.info("Fetched the Ad Structure.." + adsList.size());
parser = new JsonParser();
result = parser.parse(ads);
obj = result.getAsJsonObject();
} else {
LOGGER.info("Exception in response when fetching ads for Account Id: " + advertAccId + " , error response message: " + response.message() + " and with limit value: " + limitValue);
maxTries++;
if (maxTries >= 6) {
break;
}
try {
LOGGER.info("Thread Sleeping For 2 Minutes - Started for fb se_account_id " + advertAccId + " and maxTries = " + maxTries);
Thread.sleep(120000);
LOGGER.info("Thread Sleeping For 2 Minutes - Completed for fb se_account_id " + advertAccId + " and maxTries = " + maxTries);
} catch (InterruptedException ex) {
Logger.getLogger(FacebookAccountStructureSyncThroughJson.class.getName()).log(Level.SEVERE, null, ex);
}
}
} else {
obj = new JsonObject();
}
}
}
}
} catch (JsonSyntaxException | IOException | JSONException ex) {
LOGGER.error(ex);
}
return adsList;
}
If anybody knows anything about this issue, please help me.
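For readers skimming the long method above, the strategy it implements is: request a page of ads, and on an APIException halve the limit and retry; once a page succeeds, keep following paging.next (with OkHttp) until it is absent, sleeping and retrying up to five times on a failed page. A stripped-down sketch of that idea, with a hypothetical fetchPage(url) helper standing in for the HTTP GET:

// Hedged sketch of the retry-and-page loop; fetchPage(url) is illustrative, not an SDK call.
int limit = 100;
String url = "https://graph.facebook.com/v3.3/act_<ACCOUNT_ID>/ads?fields=...&limit=" + limit;
JSONArray allAds = new JSONArray();
while (url != null) {
    JSONObject page;
    try {
        page = new JSONObject(fetchPage(url));
    } catch (RuntimeException e) {            // e.g. an HTTP 500 surfaced by the helper
        if (limit <= 6) throw e;              // give up, mirroring the original threshold
        limit = limit / 2;                    // halve the page size and retry
        url = url.replaceAll("limit=\\d+", "limit=" + limit);
        continue;
    }
    JSONArray data = page.getJSONArray("data");
    for (int i = 0; i < data.length(); i++) allAds.put(data.get(i));
    JSONObject paging = page.optJSONObject("paging");
    url = (paging != null && paging.has("next")) ? paging.getString("next") : null;
}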

100 records at a time to udf

I have to pass records to a UDF which calls an API. Since we want to do this in parallel, we are using Spark, which is why the UDF is being developed. The problem is that the UDF needs to take only 100 records at a time, no more than that; it can't handle more than 100 records in parallel. So how do I ensure that only 100 records are passed to it in one go? Please note we don't want to use the count() function on the whole dataset.
I am attaching the UDF code here; it's a generic UDF which returns an array of structs. Moreover, if we pass 100 records in the batchsize variable each time and there are, say, 198 records, then since we don't want to use count() we won't know that the last batch is going to be only 98 records. So how do I handle that?
To restate: I have a generic UDF that calls an API, but before calling it first builds a batch of 100 records and only then calls the REST API. The arguments the UDF takes are x1:string, x2:string, batchsize:integer (currently the batch size is 100). Until the batch reaches 100 records the call will not happen, and for each record it will return null.
So up to the 99th record it returns null, but at the 100th record the call happens.
Now the problem: since we are taking a batch size of 100 and the call takes place only at the 100th record, if we have, say, 198 records in the file, then 100 records will get output but the other 98 will only return null, because they never get processed.
So please suggest a way around this; the UDF takes one record at a time, but it keeps collecting until the 100th record. I hope this clears things up (see the batching sketch after the UDF code below).
public class Standardize_Address extends GenericUDF {
private static final Logger logger = LoggerFactory.getLogger(Standardize_Address.class);
private int counter = 0;
Client client = null;
private Batch batch = new Batch();
public Standardize_Address() {
client = new ClientBuilder().withUrl("https://ss-staging-public.beringmedia.com/street-address").build();
}
// StringObjectInspector streeti;
PrimitiveObjectInspector streeti;
PrimitiveObjectInspector cityi;
PrimitiveObjectInspector zipi;
PrimitiveObjectInspector statei;
PrimitiveObjectInspector batchsizei;
private ArrayList ret;
@Override
public String getDisplayString(String[] argument) {
return "My display string";
}
@Override
public ObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
System.out.println("under initialize");
if (args[0] == null) {
throw new UDFArgumentTypeException(0, "NO Street is mentioned");
}
if (args[1] == null) {
throw new UDFArgumentTypeException(0, "No Zip is mentioned");
}
if (args[2] == null) {
throw new UDFArgumentTypeException(0, "No city is mentioned");
}
if (args[3] == null) {
throw new UDFArgumentTypeException(0, "No State is mentioned");
}
if (args[4] == null) {
throw new UDFArgumentTypeException(0, "No batch size is mentioned");
}
/// streeti =args[0];
streeti = (PrimitiveObjectInspector)args[0];
// this.streetvalue = (StringObjectInspector) streeti;
cityi = (PrimitiveObjectInspector)args[1];
zipi = (PrimitiveObjectInspector)args[2];
statei = (PrimitiveObjectInspector)args[3];
batchsizei = (PrimitiveObjectInspector)args[4];
ret = new ArrayList();
ArrayList structFieldNames = new ArrayList();
ArrayList structFieldObjectInspectors = new ArrayList();
structFieldNames.add("Street");
structFieldNames.add("city");
structFieldNames.add("zip");
structFieldNames.add("state");
structFieldObjectInspectors.add(PrimitiveObjectInspectorFactory.writableStringObjectInspector);
structFieldObjectInspectors.add(PrimitiveObjectInspectorFactory.writableStringObjectInspector);
structFieldObjectInspectors.add(PrimitiveObjectInspectorFactory.writableStringObjectInspector);
structFieldObjectInspectors.add(PrimitiveObjectInspectorFactory.writableStringObjectInspector);
StructObjectInspector si2 = ObjectInspectorFactory.getStandardStructObjectInspector(structFieldNames,
structFieldObjectInspectors);
ListObjectInspector li2;
li2 = ObjectInspectorFactory.getStandardListObjectInspector(si2);
return li2;
}
@Override
public Object evaluate(DeferredObject[] args) throws HiveException {
ret.clear();
System.out.println("under evaluate");
// String street1 = streetvalue.getPrimitiveJavaObject(args[0].get());
Object oin = args[4].get();
System.out.println("under typecasting");
int batchsize = (Integer) batchsizei.getPrimitiveJavaObject(oin);
System.out.println("batchsize");
Object oin1 = args[0].get();
String street1 = (String) streeti.getPrimitiveJavaObject(oin1);
Object oin2 = args[1].get();
String zip1 = (String) zipi.getPrimitiveJavaObject(oin2);
Object oin3 = args[2].get();
String city1 = (String) cityi.getPrimitiveJavaObject(oin3);
Object oin4 = args[3].get();
String state1 = (String) statei.getPrimitiveJavaObject(oin4);
logger.info("address passed, street=" + street1 + ",zip=" + zip1 + ",city=" + city1 + ",state=" + state1);
counter++;
try {
System.out.println("under try");
Lookup lookup = new Lookup();
lookup.setStreet(street1);
lookup.setCity(city1);
lookup.setState(state1);
lookup.setZipCode(zip1);
lookup.setMaxCandidates(1);
batch.add(lookup);
} catch (BatchFullException ex) {
logger.error(ex.getMessage(), ex);
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
/* batch.add(lookup); */
if (counter == batchsize) {
System.out.println("under if");
try {
logger.info("batch input street " + batch.get(0).getStreet());
try {
client.send(batch);
} catch (Exception e) {
logger.error(e.getMessage(), e);
logger.warn("skipping current batch, continuing with the next batch");
batch.clear();
counter = 0;
return null;
}
Vector<Lookup> lookups = batch.getAllLookups();
for (int i = 0; i < batch.size(); i++) {
// ListObjectInspector candidates;
ArrayList<Candidate> candidates = lookups.get(i).getResult();
if (candidates.isEmpty()) {
logger.warn("Address " + i + " is invalid.\n");
continue;
}
logger.info("Address " + i + " is valid. (There is at least one candidate)");
for (Candidate candidate : candidates) {
final Components components = candidate.getComponents();
final Metadata metadata = candidate.getMetadata();
logger.info("\nCandidate " + candidate.getCandidateIndex() + ":");
logger.info("Delivery line 1: " + candidate.getDeliveryLine1());
logger.info("Last line: " + candidate.getLastLine());
logger.info("ZIP Code: " + components.getZipCode() + "-" + components.getPlus4Code());
logger.info("County: " + metadata.getCountyName());
logger.info("Latitude: " + metadata.getLatitude());
logger.info("Longitude: " + metadata.getLongitude());
}
Object[] e;
e = new Object[4];
e[0] = new Text(candidates.get(i).getComponents().getStreetName());
e[1] = new Text(candidates.get(i).getComponents().getCityName());
e[2] = new Text(candidates.get(i).getComponents().getZipCode());
e[3] = new Text(candidates.get(i).getComponents().getState());
ret.add(e);
}
counter = 0;
batch.clear();
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
return ret;
} else {
return null;
}
}
}
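Not part of the UDF above, but as a sketch of the chunking idea mentioned in the question: if the driving code can feed records through an iterator (for example per Spark partition), grouping into chunks of at most 100 handles the trailing partial batch naturally, without needing count() up front. The sendBatch name is purely illustrative:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class Batcher {
    static final int BATCH_SIZE = 100;

    // Collects up to BATCH_SIZE records before calling sendBatch; the final, possibly
    // smaller, chunk is flushed once the iterator is exhausted.
    static <T> void processInChunks(Iterator<T> records) {
        List<T> batch = new ArrayList<>(BATCH_SIZE);
        while (records.hasNext()) {
            batch.add(records.next());
            if (batch.size() == BATCH_SIZE) {
                sendBatch(batch);      // hypothetical REST call for a full batch
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            sendBatch(batch);          // trailing partial batch (e.g. the last 98 of 198)
        }
    }

    static <T> void sendBatch(List<T> batch) {
        System.out.println("sending batch of " + batch.size()); // placeholder for the API call
    }
}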

ERROR org.omg.CORBA.MARSHAL Sequence length too large

After successfully fetching alarms from the CORBA U2000 server, while reading the values I am getting the error below:
ERROR: org.omg.CORBA.MARSHAL: Sequence length too large. Only 12 available and trying to assign 31926513 vmcid: 0x0 minor code: 0 completed: No
org.omg.CORBA.MARSHAL: Sequence length too large. Only 12 available and trying to assign 31926513 vmcid: 0x0 minor code: 0 completed: No
at org.omg.CosNotification.EventBatchHelper.read(EventBatchHelper.java:57)
at AlarmIRPConstDefs.AlarmInformationSeqHelper.read(AlarmInformationSeqHelper.java:51)
at AlarmIRPConstDefs.AlarmInformationSeqHelper.extract(AlarmInformationSeqHelper.java:26)
at com.be.u2k.Main.getAlarmsList(Main.java:144)
at com.be.u2k.Main.main(Main.java:109)
when calling the method AlarmInformationSeqHelper.extract:
// Get all active alarms list
private static void getAlarmsList(ORB orb, AlarmIRP alarmIRP) {
try {
ManagedGenericIRPConstDefs.StringTypeOpt filter = new ManagedGenericIRPConstDefs.StringTypeOpt();
filter.value("($type_name == 'x1')"); // Query new alarms and acknowledge or unacknowledge alarms
AlarmIRPConstDefs.DNTypeOpt base_object = new AlarmIRPConstDefs.DNTypeOpt();
BooleanHolder flag = new BooleanHolder();
AlarmIRPSystem.AlarmInformationIteratorHolder iter = new AlarmIRPSystem.AlarmInformationIteratorHolder();
StructuredEvent[] alarmList = alarmIRP.get_alarm_list(filter, base_object, flag, iter);
System.out.println("AlarmIRP get_alarm_list success, flag: " + flag.value + " fetched total: " + (alarmList == null? -1: alarmList.length));
for (StructuredEvent alarm: alarmList) {
if (alarm.header != null) {
System.out.println("fixed_header.event_type.name: " + alarm.header.fixed_header.event_type.type_name
+ " fixed_header.event_type.domain_name: " + alarm.header.fixed_header.event_type.domain_name);
if (alarm.header.variable_header != null) {
for (Property variableHeader: alarm.header.variable_header) {
System.out.println("variable_header.name: " + variableHeader.name + " alarm.header.variable_header.value: " + variableHeader.value);
}
}
}
if (alarm.filterable_data != null) {
for (Property filterableData: alarm.filterable_data) {
System.out.println("data.name: " + filterableData.name);
if (filterableData.value != null && filterableData.value.toString().contains("org.jacorb.orb.CDROutputStream")) {
StructuredEvent[] filterableDataValues = AlarmInformationSeqHelper.extract(filterableData.value);
} else {
System.out.println("data.value: " + filterableData.value);
}
}
}
}
} catch (ManagedGenericIRPSystem.InvalidParameter e) {
System.out.println("ERROR get_alarm_list InvalidParameter (Indicates that the parameter is invalid): " + e) ;
} catch (ManagedGenericIRPSystem.ParameterNotSupported e) {
System.out.println("ERROR get_alarm_list ParameterNotSupported (Indicates that the operation is not supported): " + e) ;
} catch (AlarmIRPSystem.GetAlarmList e) {
System.out.println("ERROR get_alarm_list ParameterNotSupported (Indicates exceptions caused by unknown reasons): " + e) ;
}
}
Or is my way of reading the alarms list incorrect? Thanks.
You can find an example get_alarm_list method below:
//Connect to AlarmIRP
AlarmIRP alarmIRP = AlarmIRPHelper.narrow(orb.string_to_object(alarmIrpIOR.value));
StringTypeOpt alarmFilter = new StringTypeOpt();
alarmFilter.value("");
DNTypeOpt base_object = new DNTypeOpt();
base_object.value("");
BooleanHolder flag = new BooleanHolder(false); // false for iteration
AlarmInformationIteratorHolder iter = new AlarmInformationIteratorHolder();
List<String> alarmIds = get_alarm_list(alarmIRP, alarmFilter, base_object, flag, iter);
private List<String> get_alarm_list(org._3gppsa5_2.AlarmIRPSystem.AlarmIRP alarmIRP, org._3gppsa5_2.ManagedGenericIRPConstDefs.StringTypeOpt alarmFilter, org._3gppsa5_2.AlarmIRPConstDefs.DNTypeOpt base_object, BooleanHolder flag, org._3gppsa5_2.AlarmIRPSystem.AlarmInformationIteratorHolder iter) throws org._3gppsa5_2.AlarmIRPSystem.GetAlarmList, org._3gppsa5_2.ManagedGenericIRPSystem.ParameterNotSupported, org._3gppsa5_2.AlarmIRPSystem.NextAlarmInformations, org._3gppsa5_2.ManagedGenericIRPSystem.InvalidParameter, BAD_OPERATION {
logger.info("[get-alarm-list][start]");
alarmIRP.get_alarm_list(alarmFilter, base_object, flag, iter);
List<StructuredEvent> alarms = new ArrayList();
EventBatchHolder alarmInformation = new EventBatchHolder();
short alarmSize = 100;
List<String> alarmIds = new ArrayList();
while (iter.value.next_alarmInformations(alarmSize, alarmInformation)) {
alarms.addAll(Arrays.asList(alarmInformation.value));
logger.info("Current alarm size:" + alarms.size());
}
for (StructuredEvent event : alarms) {
try {
//printAlarm(event);
} catch (Exception ex) {
}
List<Property> rem = new ArrayList<Property>();
rem.addAll(Arrays.asList(PropertySeqHelper.extract(event.remainder_of_body)));
for (Property property : rem) {
if (!property.name.equals(org._3gppsa5_2.AlarmIRPNotifications.NotifyNewAlarm.ALARM_ID)) {
continue;
}
alarmIds.add(property.value.extract_string());
}
}
logger.info("[get-alarm-list][completed] size :" + alarms.size());
return alarmIds;
}
I managed to figure out what that filterableData.value.toString() value of "org.jacorb.orb.CDROutputStream" is. It turns out that the property named "b" is a TimeBase::UtcT according to the docs.
To convert it to the correct value, which is a UTC timestamp, I changed the condition to:
if (filterableData.name.equals("b") && filterableData.value != null && filterableData.value.toString().contains("org.jacorb.orb.CDROutputStream")) {
    long occuranceTime = TimeTHelper.read(filterableData.value.create_input_stream());
    System.out.println("data.value: " + occuranceTime);
}
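If you also need that value as a Java timestamp: TimeBase::TimeT counts 100-nanosecond intervals since 1582-10-15 (the Gregorian reform epoch), so a conversion sketch, assuming the value read above, would be:

// Converts a CORBA TimeBase::TimeT (100 ns units since 1582-10-15) to Unix epoch milliseconds.
// 122192928000000000L is the offset between 1582-10-15 and 1970-01-01 in 100 ns units.
static long timeTToUnixMillis(long timeT) {
    return (timeT - 122192928000000000L) / 10_000L;
}

// usage with the value read above:
// System.out.println(new java.util.Date(timeTToUnixMillis(occuranceTime)));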

Kafka Java consumer works only for localhost and fails for remote server

I've been working with Kafka for two months, and I used this code to consume messages locally. I recently decided to distribute ZooKeeper and Kafka, and everything seems to work just fine. My issue started when I tried to use the consumer code from a remote IP; once I change seeds.add("127.0.0.1"); to seeds.add("104.131.40.xxx"); I get this error message:
run:
Error communicating with Broker [104.131.40.xxx] to find Leader for [temperature, 0] Reason:
java.net.ConnectException: Connection refused Can't find metadata for Topic and Partition. Exiting
BUILD SUCCESSFUL (total time: 21 seconds)
This is the code that I currently use:
/*
Kafka API consumer reads 10 readings from the "temperature" topic
*/
package simpleexample;
import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.ErrorMapping;
import kafka.common.TopicAndPartition;
import kafka.javaapi.*;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class SimpleExample {
public static void main(String args[]) {
SimpleExample example = new SimpleExample();
//long maxReads = Long.parseLong(args[0]);
long maxReads = 10;
//String topic = args[1];
String topic = "temperature";
//int partition = Integer.parseInt(args[2]);
int partition =0;
List<String> seeds = new ArrayList<String>();
//seeds.add(args[3]);
seeds.add("104.131.40.xxx");
//int port = Integer.parseInt(args[4]);
int port =9092;
try {
example.run(maxReads, topic, partition, seeds, port);
} catch (Exception e) {
System.out.println("Oops:" + e);
e.printStackTrace();
}
}
private List<String> m_replicaBrokers = new ArrayList<String>();
public SimpleExample() {
m_replicaBrokers = new ArrayList<String>();
}
public void run(long a_maxReads, String a_topic, int a_partition, List<String> a_seedBrokers, int a_port) throws Exception {
// find the meta data about the topic and partition we are interested in
//
PartitionMetadata metadata = findLeader(a_seedBrokers, a_port, a_topic, a_partition);
if (metadata == null) {
System.out.println("Can't find metadata for Topic and Partition. Exiting");
return;
}
if (metadata.leader() == null) {
System.out.println("Can't find Leader for Topic and Partition. Exiting");
return;
}
String leadBroker = metadata.leader().host();
String clientName = "Client_" + a_topic + "_" + a_partition;
SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
long readOffset = getLastOffset(consumer,a_topic, a_partition, kafka.api.OffsetRequest.EarliestTime(), clientName);
int numErrors = 0;
while (a_maxReads > 0) {
if (consumer == null) {
consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
}
FetchRequest req = new FetchRequestBuilder()
.clientId(clientName)
.addFetch(a_topic, a_partition, readOffset, 100000) // Note: this fetchSize of 100000 might need to be increased if large batches are written to Kafka
.build();
FetchResponse fetchResponse = consumer.fetch(req);
if (fetchResponse.hasError()) {
numErrors++;
// Something went wrong!
short code = fetchResponse.errorCode(a_topic, a_partition);
System.out.println("Error fetching data from the Broker:" + leadBroker + " Reason: " + code);
if (numErrors > 5) break;
if (code == ErrorMapping.OffsetOutOfRangeCode()) {
// We asked for an invalid offset. For simple case ask for the last element to reset
readOffset = getLastOffset(consumer,a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
continue;
}
consumer.close();
consumer = null;
leadBroker = findNewLeader(leadBroker, a_topic, a_partition, a_port);
continue;
}
numErrors = 0;
long numRead = 0;
for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(a_topic, a_partition)) {
long currentOffset = messageAndOffset.offset();
if (currentOffset < readOffset) {
System.out.println("Found an old offset: " + currentOffset + " Expecting: " + readOffset);
continue;
}
readOffset = messageAndOffset.nextOffset();
ByteBuffer payload = messageAndOffset.message().payload();
byte[] bytes = new byte[payload.limit()];
payload.get(bytes);
System.out.println(String.valueOf(messageAndOffset.offset()) + ": " + new String(bytes, "UTF-8"));
numRead++;
a_maxReads--;
}
if (numRead == 0) {
try {
Thread.sleep(1000);
} catch (InterruptedException ie) {
}
}
}
if (consumer != null) consumer.close();
}
public static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
long whichTime, String clientName) {
TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
OffsetResponse response = consumer.getOffsetsBefore(request);
if (response.hasError()) {
System.out.println("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition) );
return 0;
}
long[] offsets = response.offsets(topic, partition);
return offsets[0];
}
private String findNewLeader(String a_oldLeader, String a_topic, int a_partition, int a_port) throws Exception {
for (int i = 0; i < 3; i++) {
boolean goToSleep = false;
PartitionMetadata metadata = findLeader(m_replicaBrokers, a_port, a_topic, a_partition);
if (metadata == null) {
goToSleep = true;
} else if (metadata.leader() == null) {
goToSleep = true;
} else if (a_oldLeader.equalsIgnoreCase(metadata.leader().host()) && i == 0) {
// first time through if the leader hasn't changed give ZooKeeper a second to recover
// second time, assume the broker did recover before failover, or it was a non-Broker issue
//
goToSleep = true;
} else {
return metadata.leader().host();
}
if (goToSleep) {
try {
Thread.sleep(1000);
} catch (InterruptedException ie) {
}
}
}
System.out.println("Unable to find new leader after Broker failure. Exiting");
throw new Exception("Unable to find new leader after Broker failure. Exiting");
}
private PartitionMetadata findLeader(List<String> a_seedBrokers, int a_port, String a_topic, int a_partition) {
PartitionMetadata returnMetaData = null;
loop:
for (String seed : a_seedBrokers) {
SimpleConsumer consumer = null;
try {
consumer = new SimpleConsumer(seed, a_port, 100000, 64 * 1024, "leaderLookup");
List<String> topics = Collections.singletonList(a_topic);
TopicMetadataRequest req = new TopicMetadataRequest(topics);
kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);
List<TopicMetadata> metaData = resp.topicsMetadata();
for (TopicMetadata item : metaData) {
for (PartitionMetadata part : item.partitionsMetadata()) {
if (part.partitionId() == a_partition) {
returnMetaData = part;
break loop;
}
}
}
} catch (Exception e) {
System.out.println("Error communicating with Broker [" + seed + "] to find Leader for [" + a_topic
+ ", " + a_partition + "] Reason: " + e);
} finally {
if (consumer != null) consumer.close();
}
}
if (returnMetaData != null) {
m_replicaBrokers.clear();
for (kafka.cluster.Broker replica : returnMetaData.replicas()) {
m_replicaBrokers.add(replica.host());
}
}
return returnMetaData;
}
}
You need to set advertised.host.name instead of host.name in the Kafka server.properties configuration file.
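A sketch of the relevant broker settings in server.properties, assuming the public IP from the question and the default port (the broker registers this address and hands it to clients, so remote consumers connect to it instead of localhost):

# server.properties (sketch)
advertised.host.name=104.131.40.xxx
port=9092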
