I am trying to use S3TransferManager to upload a file to S3, but my unit test fails with the error below:
java.util.concurrent.CompletionException: software.amazon.awssdk.services.s3.model.S3Exception: Invalid response status from request
Here's my code:
public class AwsTransferService {
private final S3TransferManager s3TransferManager;
private final AwsS3Config s3Config;
public AwsTransferService(AwsS3Config s3Config, AwsConfig awsConfig) {
this.s3Config = s3Config;
AwsBasicCredentials awsCredentials = create(awsConfig.getAccessKey(), awsConfig.getSecretKey());
this.s3TransferManager = S3TransferManager.builder()
.s3ClientConfiguration(builder -> builder.credentialsProvider(create(awsCredentials))
.region(s3Config.getRegion())
.minimumPartSizeInBytes(10 * MB)
.targetThroughputInGbps(20.0))
.build();
}
public AwsTransferService(S3TransferManager s3TransferManager, AwsS3Config s3Config) {
this.s3TransferManager = s3TransferManager;
this.s3Config = s3Config;
}
public void transferObject(@NonNull String bucketName, @NonNull String transferKey, @NonNull File file) {
validateS3Key(transferKey);
validatePath(file.toPath());
log.info("Transfering s3 object from :{} to :{}", file.getPath(), transferKey);
try {
Upload upload =
s3TransferManager.upload(b -> b.putObjectRequest(r -> r.bucket(bucketName).key(transferKey))
.source(file.toPath()));
CompletedUpload completedUpload = upload.completionFuture().join();
log.info("PutObjectResponse: " + completedUpload.response());
} catch (Exception e) {
e.printStackTrace();
}
}
And here is my unit test for the above code:
@RegisterExtension
public static final S3MockExtension S3_MOCK = builder()
.silent()
.withSecureConnection(false)
.build();
private S3ClientConfiguration s3ClientConfiguration;
private AwsTransferService service;
private AwsS3Service awsS3Service;
private S3TransferManager s3TransferManager;
private static S3Client s3Client;
@BeforeAll
public static void beforeAll() {
s3Client = S3_MOCK.createS3ClientV2();
}
@BeforeEach
public void beforeEach() throws IOException {
s3ClientConfiguration = mock(S3ClientConfiguration.class);
s3TransferManager = S3TransferManager.builder().s3ClientConfiguration(s3ClientConfiguration).build();
AwsS3Config s3Config = AwsS3Config.builder()
.region(Region.AP_SOUTHEAST_2)
.s3BucketName(S3Factory.VALID_S3_BUCKET)
.build();
awsS3Service = new AwsS3Service(s3Config, s3Client);
awsS3Service.createBucket(VALID_S3_BUCKET);
service = new AwsTransferService(s3TransferManager, s3Config);
}
@Test
public void transferObject_singleFile_ShouldUploadFiletoS3() throws IOException {
String transferKey = TRANSFER_KEY_UPLOAD;
String fileName = FILE_PATH + TRANSFER_FILE_NAME;
writeFile(fileName);
File transferFile = new File(fileName);
service.transferObject(VALID_S3_BUCKET, transferKey + TRANSFER_FILE_NAME, transferFile);
int expectedObjectsSize = 1;
Log.initLoggingToFile(Log.LogLevel.Error, "log.txt");
List<S3Object> matchedObjects = awsS3Service.listObjectsWithPrefix(transferKey + TRANSFER_FILE_NAME);
assertEquals(expectedObjectsSize, matchedObjects.size());
assertEquals(transferKey + TRANSFER_FILE_NAME, matchedObjects.get(0).key());
}
Please let me know why the unit test fails with the above-mentioned error.
Also, is there any other way to mock "s3ClientConfiguration" in the AWS Java SDK v2?
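For reference, one alternative to mocking S3ClientConfiguration is to mock the S3TransferManager itself and unit-test AwsTransferService through its second constructor, so the test only verifies the delegation and never needs a real (or mocked) S3 endpoint; a transfer manager built on a mocked configuration has no usable region, credentials, or endpoint, which is one plausible cause of the "Invalid response status" failure. The sketch below is only an illustration: it assumes Mockito with its usual static imports (mock/when/verify) plus org.mockito.ArgumentMatchers, reuses the classes already referenced by the test above (S3TransferManager, Upload, CompletedUpload, UploadRequest, AwsS3Config, S3Factory), and the key and temp file are placeholders that must satisfy the service's own validateS3Key/validatePath checks.

@Test
public void transferObject_withMockedTransferManager_delegatesUpload() throws IOException {
    // Mock the transfer manager and its upload result instead of S3ClientConfiguration
    S3TransferManager mockedTransferManager = mock(S3TransferManager.class);
    Upload upload = mock(Upload.class);
    CompletedUpload completedUpload = mock(CompletedUpload.class);
    when(upload.completionFuture())
            .thenReturn(CompletableFuture.completedFuture(completedUpload));
    // The service calls the Consumer<UploadRequest.Builder> overload of upload(...)
    when(mockedTransferManager.upload(ArgumentMatchers.<Consumer<UploadRequest.Builder>>any()))
            .thenReturn(upload);

    AwsS3Config s3Config = AwsS3Config.builder()
            .region(Region.AP_SOUTHEAST_2)
            .s3BucketName(S3Factory.VALID_S3_BUCKET)
            .build();
    AwsTransferService service = new AwsTransferService(mockedTransferManager, s3Config);

    // Placeholder inputs; adjust so they pass the service's own validation
    File tempFile = File.createTempFile("transfer-test", ".txt");
    service.transferObject(S3Factory.VALID_S3_BUCKET, "some/key.txt", tempFile);

    // Verify the service delegated the upload to the transfer manager
    verify(mockedTransferManager).upload(ArgumentMatchers.<Consumer<UploadRequest.Builder>>any());
}

This only exercises the service's delegation logic; it does not perform a real upload against the S3Mock server.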
The Java SDK docs don't cover launching a spot instance into a VPC with a Public IP: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/tutorial-spot-adv-java.html.
How do you do that?
Here's an SSCCE using aws-java-sdk-ec2-1.11.487:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateTagsRequest;
import com.amazonaws.services.ec2.model.InstanceNetworkInterfaceSpecification;
import com.amazonaws.services.ec2.model.InstanceType;
import com.amazonaws.services.ec2.model.LaunchSpecification;
import com.amazonaws.services.ec2.model.RequestSpotInstancesRequest;
import com.amazonaws.services.ec2.model.SpotInstanceRequest;
import com.amazonaws.services.ec2.model.Tag;
public class SpotLauncher
{
private static final int kInstances = 25;
private static final String kMaxPrice = "0.007";
private static final InstanceType kInstanceType = InstanceType.M3Medium;
private static final String kSubnet = "subnet-xxxx";
private static final String kAmi = "ami-xxxx";
private static final String kName = "spot";
private static final String kSecurityGroup2 = "sg-xxxx";
private static final String kSecurityGroup1 = "sg-yyyy";
public static void main(String[] args)
{
AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
RequestSpotInstancesRequest request = new RequestSpotInstancesRequest();
request.setSpotPrice(kMaxPrice); // max price we're willing to pay
request.setInstanceCount(kInstances);
LaunchSpecification launchSpecification = new LaunchSpecification();
launchSpecification.setImageId(kAmi);
launchSpecification.setInstanceType(kInstanceType);
launchSpecification.setKeyName("aws");
// security group IDs - don't add them, they're already added to the network spec
// launchSpecification.withAllSecurityGroups(new GroupIdentifier().withGroupId("sg-xxxx"), new GroupIdentifier().withGroupId("sg-yyyy"));
List<String> securityGroups = new ArrayList<String>();
securityGroups.add(kSecurityGroup1);
securityGroups.add(kSecurityGroup2);
InstanceNetworkInterfaceSpecification networkSpec = new InstanceNetworkInterfaceSpecification();
networkSpec.setDeviceIndex(0);
networkSpec.setSubnetId(kSubnet);
networkSpec.setGroups(securityGroups);
networkSpec.setAssociatePublicIpAddress(true);
List<InstanceNetworkInterfaceSpecification> nicWrapper = new ArrayList<InstanceNetworkInterfaceSpecification>();
nicWrapper.add(networkSpec);
// launchSpecification.setSubnetId("subnet-ccde4ce1"); // don't add this, it's already added to the network interface spec
launchSpecification.setNetworkInterfaces(nicWrapper);
// add the launch specifications to the request
request.setLaunchSpecification(launchSpecification);
// call the RequestSpotInstance API
ec2.requestSpotInstances(request);
while (!SetEc2Names(ec2))
{
Sleep(2000);
}
System.out.println("\nDONE.");
}
private static void Sleep(long aMillis)
{
try
{
Thread.sleep(aMillis);
}
catch (InterruptedException aEx)
{
aEx.printStackTrace();
}
}
private static boolean SetEc2Names(AmazonEC2 aEc2Client)
{
List<SpotInstanceRequest> requests = aEc2Client.describeSpotInstanceRequests().getSpotInstanceRequests();
Collections.sort(requests, GetCreatedDescComparator());
for (int i = 0; i < kInstances; i++)
{
SpotInstanceRequest request = requests.get(i);
if (request.getLaunchSpecification().getImageId().equals(kAmi))
{
System.out.println("request: " + request);
String instanceId = request.getInstanceId();
if (instanceId == null)
{
System.out.println("instance not launched yet, we don't have an id");
return false;
}
System.out.println("setting name for newly launched spot instance, id: " + instanceId);
AssignName(aEc2Client, request);
}
}
return true;
}
private static void AssignName(AmazonEC2 aEc2Client, SpotInstanceRequest aRequest)
{
String instanceId = aRequest.getInstanceId();
Tag tag = new Tag("Name", kName);
CreateTagsRequest tagRequest = new CreateTagsRequest();
List<String> instanceIds = new ArrayList<String>();
instanceIds.add(instanceId);
tagRequest.withResources(instanceIds);
List<Tag> tags = new ArrayList<Tag>();
tags.add(tag);
tagRequest.setTags(tags);
aEc2Client.createTags(tagRequest);
}
private static Comparator<SpotInstanceRequest> GetCreatedDescComparator()
{
return new Comparator<SpotInstanceRequest>()
{
@Override
public int compare(SpotInstanceRequest o1, SpotInstanceRequest o2)
{
return -1 * o1.getCreateTime().compareTo(o2.getCreateTime());
}
};
}
}
I'm trying to create a Java version of jtimon (GitHub).
On GitHub you can see the proto message defining the service.
And my Java code looks like the following:
package jti.collector;
public class JTICollector {
private static final Logger log = LoggerFactory.getLogger(JTICollector.class);
private static final String DEVICE_ADDRESS = "100.96.244.41";
private static final int GRPC_PORT = 50051;
private static final int SAMPLE_FRQ = 2000;
private static final long SLEEP_TIME = 10000;
private static final long SLEEP_TIME_FOR_REQ = 10;
public static void main(final String[] args) {
final String username = "AAA";
final String password = "BBB";
log.info("UserName : " + username);
final ManagedChannel channel = ManagedChannelBuilder
.forAddress(DEVICE_ADDRESS, GRPC_PORT)
.usePlaintext(true)
.build();
LoginGrpc.LoginBlockingStub loginStub = LoginGrpc.newBlockingStub(channel);
Authentication.LoginRequest loginRequest = Authentication.LoginRequest.newBuilder()
.setClientId("foo-bar")
.setUserName(username)
.setPassword(password)
.build();
Authentication.LoginReply loginReply = loginStub.withDeadlineAfter(SLEEP_TIME_FOR_REQ, TimeUnit.SECONDS)
.loginCheck(loginRequest);
log.info("LoginReply : " + loginReply.toString());
OpenConfigTelemetryGrpc.OpenConfigTelemetryStub stub = OpenConfigTelemetryGrpc.newStub(channel);
Telemetry.SubscriptionRequest request = Telemetry.SubscriptionRequest.newBuilder()
.addPathList(Telemetry.Path.newBuilder()
.setPath("/interfaces")
.setSampleFrequency(SAMPLE_FRQ) // in millis
.build())
.build();
final CountDownLatch finishLatch = new CountDownLatch(1);
StreamObserver<Telemetry.OpenConfigData> responseObserver = new StreamObserver<Telemetry.OpenConfigData>() {
@Override
public void onNext(final Telemetry.OpenConfigData value) {
log.info("Received Value : " + value.toString());
}
@Override
public void onError(final Throwable t) {
log.warn("onError", t);
finishLatch.countDown();
}
@Override
public void onCompleted() {
log.info("onCompleted");
finishLatch.countDown();
}
};
stub.telemetrySubscribe(request, responseObserver);
log.info("Blocking on latch");
try {
finishLatch.await();
} catch (InterruptedException e) {
log.warn("Finnish Latch Failed", e);
}
}
}
Login succeeds, but onNext() never gets called, and onError() triggers with a message that includes some of the content I'm looking for:
io.grpc.StatusRuntimeException: UNAVAILABLE: {"created":"#1511903558.423607783","description":"EOF","file":"../../../../../../../../src/dist/grpc/src/core/lib/iomgr/tcp_posix.c","file_line":235,"grpc_status":14}i-safi-nameRIPV4_UNICAST:B
>afi-safis/afi-safi[afi-safi-name='IPV4_UNICAST']/state/enabledH:A
=afi-safis/afi-safi[afi-safi-name='IPV4_UNICAST']/state/activeH
at io.grpc.Status.asRuntimeException(Status.java:526)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:418)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:663)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:392)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:443)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:525)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:446)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:557)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
See here for more information: https://github.com/grpc/grpc-java/issues/3800
In summary, the problem was that the gRPC server was failing to parse a tracing-related header in the request, but there was no public interface in the client to turn it off.
When I use AWS SES to send an email, an exception occurs which shows me:
com.amazonaws.AmazonServiceException:
Invalid date Mon, 14 Dec 2015 02:08:56 +00:00. It must be in one of
the formats specified by HTTP RFC 2616 section 3.3.1 (Service:
AmazonSimpleEmailService; Status Code: 400; Error Code:
InvalidParameterValue; Request ID:
e2716096-a207-11e5-9615-8135b4d7f5f9)
Here is my code:
public class SESEmailUtil {
private final String accesskey = "XXXXXXXXX";
private final String secretkey = "XXXXXXXXXXXXXXXX";
private String REGION = "us-east-1";
private Region region;
private static AWSCredentials credentials;
private static AmazonSimpleEmailServiceClient sesClient;
private static SESEmailUtil sesEmailUtil = null;
private SESEmailUtil() {
init(accesskey, secretkey);
};
public void init(String accesskey, String secretkey) {
credentials = new BasicAWSCredentials(accesskey, secretkey);
sesClient = new AmazonSimpleEmailServiceClient(credentials);
region = Region.getRegion(Regions.fromName(REGION));
sesClient.setRegion(region);
}
public static SESEmailUtil getInstance() {
if (sesEmailUtil == null) {
synchronized (SESEmailUtil.class) {
if (sesEmailUtil == null) {
sesEmailUtil = new SESEmailUtil();
}
}
}
return sesEmailUtil;
}
public void sendEmail(String sender, LinkedList<String> recipients,
String subject, String body) {
Destination destination = new Destination(recipients);
try {
Content subjectContent = new Content(subject);
Content bodyContent = new Content(body);
Body msgBody = new Body(bodyContent);
Message msg = new Message(subjectContent, msgBody);
SendEmailRequest request = new SendEmailRequest(sender,
destination, msg);
SendEmailResult result = sesClient.sendEmail(request);
System.out.println(result + "Email sent");
} catch (Exception e) {
e.printStackTrace();
System.out.println("Exception from EmailSender.java. Email not sent");
}
}
}
public class TestSend {
private static String sender = "";
private static LinkedList<String> recipients = new LinkedList<String>();
static final String BODY = "This email was sent through Amazon SES by using the AWS SDK for Java.";
static final String SUBJECT = "Amazon SES test (AWS SDK for Java)";
public static void main(String args[]) {
SESEmailUtil sendUtil = SESEmailUtil.getInstance();
String receive = "qinwanghao@XXXX.com.cn";
recipients.add(receive);
sendUtil.sendEmail(sender, recipients, SUBJECT, BODY);
}
}
The code is based on the example provided by AWS.
The date Mon, 14 Dec 2015 02:08:56 +00:00 is reported as invalid, but where can I modify the format?
I hope someone can help me. Thanks.
The issue is likely with the Joda-Time version. When you hit this error, check your dependency tree (with Maven: mvn dependency:tree) and look for any joda-time version that conflicts with the one supplied by the aws-sdk. To fix it, add the conflicting joda-time to the exclusions in your pom.xml (or the equivalent in your build tool) and pull in a version the SDK works with, as sketched below.
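For example, with Maven the change might look like the snippet below. This is only a sketch: com.example:some-conflicting-library is a placeholder for whatever mvn dependency:tree shows dragging in the old joda-time, and 2.8.1 is commonly cited as the minimum Joda-Time version that formats request dates correctly on newer JDKs.

<!-- placeholder for the dependency that pulls in the old joda-time -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>some-conflicting-library</artifactId>
    <version>1.0</version>
    <exclusions>
        <exclusion>
            <groupId>joda-time</groupId>
            <artifactId>joda-time</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- then pin an explicit joda-time version the AWS SDK is happy with -->
<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.8.1</version>
</dependency>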
While running a Storm topology I am getting this error. The topology runs perfectly for 5 minutes without any error, then it fails. I am using Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS as 300 seconds, i.e. 5 minutes.
This is my input stream:
{"_id":{"$oid":"556809dbe4b0ef41436f7515"},"body":{"ProductCount":NumberInt(1),"category":null,"correctedWord":"bbtp","field":null,"filter":{},"fromAutocomplete":false,"loggedIn":false,"pageNo":"1","pageSize":"64","percentageMatch":NumberInt(100),"searchTerm":"bbtp","sortOrder":null,"suggestedWords":[]},"envelope":{"IP":"115.115.115.98","actionType":"search","sessionId":"10536088910863418864","timestamp":{"$date":"2015-05-29T06:40:00.000Z"}}}
This is the complete error:
java.lang.RuntimeException: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128)
    at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
    at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
    at backtype.storm.daemon.executor$fn__4722$fn__4734$fn__4781.invoke(executor.clj:748)
    at backtype.storm.util$async_loop$fn__458.invoke(util.clj:463)
    at clojure.lang.AFn.run(AFn.java:24)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
    at backtype.storm.tuple.TupleImpl.getString(TupleImpl.java:112)
    at com.inferlytics.InferlyticsStormConsumer.bolt.QueryNormalizer.execute(QueryNormalizer.java:40)
    at backtype.storm.topology.BasicBoltExecutor.execute(BasicBoltExecutor.java:50)
    at backtype.storm.daemon.executor$fn__4722$tuple_action_fn__4724.invoke(executor.clj:633)
    at backtype.storm.daemon.executor$mk_task_receiver$fn__4645.invoke(executor.clj:404)
    at backtype.storm.disruptor$clojure_handler$reify__1446.onEvent(disruptor.clj:58)
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
    ... 6 more
My topology :
public class TopologyMain {
private static final org.slf4j.Logger LOG = org.slf4j.LoggerFactory
.getLogger(TopologyMain.class);
private static final String SPOUT_ID = "Feed-Emitter";
/**
* @param args
* @throws AlreadyAliveException
* @throws InvalidTopologyException
*/
public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException {
int numSpoutExecutors = 1;
LOG.info("This is SpoutConfig");
KafkaSpout kspout = QueryCounter();
TopologyBuilder builder = new TopologyBuilder();
LOG.info("This is Set Spout");
builder.setSpout(SPOUT_ID, kspout, numSpoutExecutors);
LOG.info("This is Query-Normalizer bolt");
builder.setBolt("Query-normalizer", new QueryNormalizer())
.shuffleGrouping(SPOUT_ID);
LOG.info("This is Query-ProductCount bolt");
builder.setBolt("Query-ProductCount", new QueryProductCount(),1)
.shuffleGrouping("Query-normalizer", "stream1");
LOG.info("This is Query-SearchTerm bolt");
builder.setBolt("Query-SearchTerm", new QuerySearchTermCount(),1)
.shuffleGrouping("Query-normalizer", "stream2");
LOG.info("This is tick-tuple bolt");
builder.setBolt("Tick-Tuple", new TickTuple(),1)
.shuffleGrouping("Query-normalizer", "stream3");
/*
* Storm Constants
* */
String NIMBUS_HOST = FilePropertyManager.getProperty( ApplicationConstants.STORM_CONSTANTS_FILE,
ApplicationConstants.NIMBUS_HOST );
String NIMBUS_THRIFT_PORT = FilePropertyManager.getProperty( ApplicationConstants.STORM_CONSTANTS_FILE,
ApplicationConstants.NIMBUS_THRIFT_PORT );
String TOPOLOGY_TICK_TUPLE_FREQ_SECS = FilePropertyManager.getProperty( ApplicationConstants.STORM_CONSTANTS_FILE,
ApplicationConstants.TOPOLOGY_TICK_TUPLE_FREQ_SECS );
String STORM_JAR = FilePropertyManager.getProperty( ApplicationConstants.STORM_CONSTANTS_FILE,
ApplicationConstants.STORM_JAR );
String SET_NUM_WORKERS = FilePropertyManager.getProperty( ApplicationConstants.STORM_CONSTANTS_FILE,
ApplicationConstants.SET_NUM_WORKERS );
String SET_MAX_SPOUT_PENDING = FilePropertyManager.getProperty( ApplicationConstants.STORM_CONSTANTS_FILE,
ApplicationConstants.SET_MAX_SPOUT_PENDING );
final int setNumWorkers = Integer.parseInt(SET_NUM_WORKERS);
final int setMaxSpoutPending = Integer.parseInt(SET_MAX_SPOUT_PENDING);
final int nimbus_thirft_port = Integer.parseInt(NIMBUS_THRIFT_PORT);
final int topology_tick_tuple_freq_secs = Integer.parseInt(TOPOLOGY_TICK_TUPLE_FREQ_SECS);
/*
* Storm Configurations
*/
LOG.trace("Setting Configuration");
Config conf = new Config();
LocalCluster cluster = new LocalCluster();
conf.put(Config.NIMBUS_HOST, NIMBUS_HOST);
conf.put(Config.NIMBUS_THRIFT_PORT, nimbus_thirft_port);
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, topology_tick_tuple_freq_secs);
System.setProperty("storm.jar",STORM_JAR );
conf.setNumWorkers(setNumWorkers);
conf.setMaxSpoutPending(setMaxSpoutPending);
if (args != null && args.length > 0) {
LOG.trace("Storm Topology Submitted On CLuster");
StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
}
else
{
LOG.trace("Storm Topology Submitted On Local");
cluster.submitTopology("Query", conf, builder.createTopology());
Utils.sleep(10000);
cluster.killTopology("Query");
LOG.trace("This is ShutDown cluster");
cluster.shutdown();
}
LOG.trace("Method: main finished.");
}
private static KafkaSpout QueryCounter() {
//Build a kafka spout
/*
* Kafka Constants
*/
final String topic = FilePropertyManager.getProperty( ApplicationConstants.KAFKA_CONSTANTS_FILE,
ApplicationConstants.TOPIC );
String zkHostPort = FilePropertyManager.getProperty( ApplicationConstants.KAFKA_CONSTANTS_FILE,
ApplicationConstants.ZOOKEEPER_CONNECTION_STRING );
String zkRoot = "/Feed-Emitter";
String zkSpoutId = "Feed-Emitter-spout";
ZkHosts zkHosts = new ZkHosts(zkHostPort);
LOG.trace("This is Inside kafka spout ");
SpoutConfig spoutCfg = new SpoutConfig(zkHosts, topic, zkRoot, zkSpoutId);
spoutCfg.scheme = new SchemeAsMultiScheme(new StringScheme());
KafkaSpout kafkaSpout = new KafkaSpout(spoutCfg);
LOG.trace("Returning From kafka spout ");
return kafkaSpout;
}
}
My QueryNormalizer bolt:
public class QueryNormalizer extends BaseBasicBolt {
/**
*
*/
private static final org.slf4j.Logger LOG = org.slf4j.LoggerFactory
.getLogger(QueryNormalizer.class);
public void cleanup() {}
/**
* The bolt will receive the line from the
* feed file and process it to normalize it.
*
* The normalizer will put the terms in lower case
* and split the line to get all terms.
*/
public void execute(Tuple input, BasicOutputCollector collector) {
LOG.trace("Method in QueryNormalizer: execute called.");
String feed = input.getString(0);
String searchTerm = null;
String pageNo = null;
boolean sortOrder = true;
boolean category = true;
boolean field = true;
boolean filter = true;
String pc = null;
int ProductCount = 0;
String timestamp = null;
String year = null;
String month = null;
String day = null;
String hour = null;
Calendar calendar = Calendar.getInstance();
int dayOfYear = calendar.get(Calendar.DAY_OF_YEAR);
int weekOfYear = calendar.get(Calendar.WEEK_OF_YEAR);
JSONObject obj = null;
try {
obj = new JSONObject(feed);
} catch (JSONException e1) {
LOG.error( "Json Exception in Query Normalizer", e1 );
}
try {
searchTerm = obj.getJSONObject("body").getString("correctedWord");
pageNo = obj.getJSONObject("body").getString("pageNo");
sortOrder = obj.getJSONObject("body").isNull("sortOrder");
category = obj.getJSONObject("body").isNull("category");
field = obj.getJSONObject("body").isNull("field");
filter = obj.getJSONObject("body").getJSONObject("filter").isNull("filters");
pc = obj.getJSONObject("body").getString("ProductCount").replaceAll("[^\\d]", "");
ProductCount = Integer.parseInt(pc);
timestamp = (obj.getJSONObject("envelope").get("timestamp")).toString().substring(10,29);
year = (obj.getJSONObject("envelope").get("timestamp")).toString().substring(10, 14);
month = (obj.getJSONObject("envelope").get("timestamp")).toString().substring(15, 17);
day = (obj.getJSONObject("envelope").get("timestamp")).toString().substring(18, 20);
hour = (obj.getJSONObject("envelope").get("timestamp")).toString().substring(21, 23);
} catch (JSONException e) {
LOG.error( "Parsing Value Exception in Query Normalizer", e );
}
searchTerm = searchTerm.trim();
//Condition to eliminate pagination
if(!searchTerm.isEmpty()){
if ((pageNo.equals("1")) && (sortOrder == true) && (category == true) && (field == true) && (filter == true)){
searchTerm = searchTerm.toLowerCase();
System.out.println("In QueryProductCount execute: "+searchTerm+","+year+","+month+","+day+","+hour+","+dayOfYear+","+weekOfYear+","+ProductCount);
System.out.println("Entire Json : "+feed);
System.out.println("In QuerySearchCount execute : "+searchTerm+","+year+","+month+","+day+","+hour);
LOG.trace("In QueryNormalizer execute : "+searchTerm+","+year+","+month+","+day+","+hour+","+dayOfYear+","+weekOfYear+","+ProductCount);
LOG.trace("In QueryNormalizer execute : "+searchTerm+","+year+","+month+","+day+","+hour);
collector.emit("stream1", new Values(searchTerm , year , month , day , hour , dayOfYear , weekOfYear , ProductCount ));
collector.emit("stream2", new Values(searchTerm , year , month , day , hour ));
collector.emit("stream3", new Values());
}LOG.trace("Method in QueryNormalizer: execute finished.");
}
}
/**
* The bolt will only emit the specified streams in collector
*/
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declareStream("stream1", new Fields("searchTerm" ,"year" ,"month" ,"day" ,"hour" ,"dayOfYear" ,"weekOfYear" ,"ProductCount"));
declarer.declareStream("stream2", new Fields("searchTerm" ,"year" ,"month" ,"day" ,"hour"));
declarer.declareStream("stream3", new Fields());
}
}
In the QueryNormalizer class, the error points to this line:
String feed = input.getString(0);
public void execute(Tuple input, BasicOutputCollector collector) {
LOG.trace("Method in QueryNormalizer: execute called.");
String feed = input.getString(0);
String searchTerm = null;
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.String
    at backtype.storm.tuple.TupleImpl.getString(TupleImpl.java:112)
    at com.inferlytics.InferlyticsStormConsumer.bolt.QueryNormalizer.execute(QueryNormalizer.java:40)
EDIT:
After removing Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS from the config, the code works properly. But I still have to implement tick tuples. How do I achieve that?
I guess there is some problem with my TickTuple class. Is this the right way to implement it?
TickTuple
public class TickTuple extends BaseBasicBolt {
private static final long serialVersionUID = 1L;
private static final org.slf4j.Logger LOG = org.slf4j.LoggerFactory
.getLogger(TickTuple.class);
private static final String KEYSPACE = FilePropertyManager.getProperty( ApplicationConstants.CASSANDRA_CONSTANTS_FILE,
ApplicationConstants.KEYSPACE );
private static final String MONGO_DB = FilePropertyManager.getProperty( ApplicationConstants.MONGO_CONSTANTS_FILE,
ApplicationConstants.MONGO_DBE );
private static final String TABLE_CASSANDRA_TOP_QUERY = FilePropertyManager.getProperty( ApplicationConstants.CASSANDRA_CONSTANTS_FILE,
ApplicationConstants.TABLE_CASSANDRA_TOP_QUERY );
private static final String MONGO_COLLECTION_E = FilePropertyManager.getProperty( ApplicationConstants.MONGO_CONSTANTS_FILE,
ApplicationConstants.MONGO_COLLECTION_E );
public void cleanup() {
}
protected static boolean isTickTuple(Tuple tuple) {
return tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
&& tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {}
@Override
public void execute(Tuple input, BasicOutputCollector collector) {
try {
if (isTickTuple(input)) {
CassExport.cassExp(KEYSPACE, TABLE_CASSANDRA_TOP_QUERY, MONGO_DB, MONGO_COLLECTION_E);
TruncateCassandraTable.truncateData(TABLE_CASSANDRA_TOP_QUERY);
LOG.trace("In Truncate");
return;
}
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Can anyone please suggest the required changes in the code?
Now I understand: you have data tuples and tick tuples in the same input stream. Thus, for data tuples the first field is of type String, but for tick tuples it is of type Long, so input.getString(0) runs into the ClassCastException for the first arriving tick tuple.
You need to update your bolt code like this:
Object field1 = input.getValue(0);
if (field1 instanceof Long) {
Long tick = (Long)field1;
// process tick tuple further
} else {
String feed = (String)field1;
// process data tuple as you did already
}
You need to differentiate between tick tuples and normal tuples within your execute method. Add this method to your bolt:
public boolean isTickTuple(Tuple tuple) {
return tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
&& tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID);
}
Now in execute, you can do
if (isTickTuple(tuple)) {
    doSomethingPeriodic();
} else {
    executeLikeBefore();
}
The problem was with my TickTuple bolt implementation. I had added
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, topology_tick_tuple_freq_secs)
in my main topology configuration. Instead, it should be added in the bolt where the tick tuple is handled.
I edited my TickTuple code, added this snippet, and everything works fine:
@Override
public Map<String, Object> getComponentConfiguration() {
// configure how often a tick tuple will be sent to our bolt
Config conf = new Config();
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, topology_tick_tuple_freq_secs);
return conf;
}
This has to be added in the corresponding bolt instead of the main topology.
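Putting the pieces from this thread together, here is a minimal sketch of a self-contained bolt that requests its own tick frequency via getComponentConfiguration() and branches on tick tuples in execute(). The hard-coded 300-second frequency stands in for the property lookup used above, and the comments mark where the CassExport/truncate logic from the question would go.

import java.util.Map;

import backtype.storm.Config;
import backtype.storm.Constants;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

public class TickAwareBolt extends BaseBasicBolt {

    private static final long serialVersionUID = 1L;
    // Same value that was previously read into topology_tick_tuple_freq_secs (300 s = 5 min)
    private static final int TICK_FREQ_SECS = 300;

    @Override
    public Map<String, Object> getComponentConfiguration() {
        // Ask Storm to send this bolt a tick tuple every TICK_FREQ_SECS seconds
        Config conf = new Config();
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, TICK_FREQ_SECS);
        return conf;
    }

    private static boolean isTickTuple(Tuple tuple) {
        return tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
                && tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID);
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        if (isTickTuple(input)) {
            // periodic work goes here (e.g. the CassExport/truncate calls from the question)
            return;
        }
        // only data tuples reach this point, so getString(0) is safe
        String feed = input.getString(0);
        // normal per-tuple processing ...
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // declare output streams here if this bolt emits anything
    }
}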