key definition for 'fetch.message.max.bytes' in Kafka

I am not sure how to define the key for the message size of my KafkaSpouts.
My example:
Map<String, Object> props = new HashMap<>();
props.put("fetch.message.max.bytes", "2097152"); // 2MB
props.put(KafkaSpoutConfig.Consumer.GROUP_ID, group);
I searched for the constant key definition of "fetch.message.max.bytes" without success.
I expect this key in KafkaSpoutConfig.Consumer or at least KafkaSpoutConfig.
Anyone know the correct location?

Storm's KafkaSpout does not offer all available keys as predefined members. However, if you know the name of the key, you can safely use a String (as shown in your example) or use a Kafka class that defines the key.
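For example, if you are on the new consumer API, the kafka-clients artifact ships org.apache.kafka.clients.consumer.ConsumerConfig, which defines String constants for its keys. A minimal sketch, with the caveat that fetch.message.max.bytes is an old-consumer property whose closest new-consumer equivalent is max.partition.fetch.bytes:
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Map<String, Object> props = new HashMap<>();
// New-consumer equivalent of the old fetch.message.max.bytes property
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "2097152"); // 2MB
props.put(ConsumerConfig.GROUP_ID_CONFIG, group); // group as in your example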


Synchronization with Guava HashBiMap and synchronizedBiMap

I am getting an exception from a Guava BiMap's putIfAbsent method in a multi-thread situation. How should I correctly protect it from threading problems?
I create the map like this:
BiMap<Integer, java.net.URI> cache = com.google.common.collect.Maps.synchronizedBiMap(HashBiMap.create());
Then, the only times I ever modify the map are by cache.clear(); or cache.putIfAbsent(a,b)
I have occasionally seen this stack trace:
java.lang.IllegalArgumentException: value already present: http://example.com
at com.google.common.collect.HashBiMap.put(HashBiMap.java:279)
at com.google.common.collect.HashBiMap.put(HashBiMap.java:260)
at java.util.Map.putIfAbsent(Map.java:744)
at com.google.common.collect.Synchronized$SynchronizedMap.putIfAbsent(Synchronized.java:1120)
Is this a bug in HashBiMap or synchronizedBiMap? Or do I need to do extra work for thread safety?
Using guava-25.0-jre and Java(TM) SE Runtime Environment 1.8.0_152-b16
Because a BiMap provides a mapping from values to keys, as well as the usual Map mapping from keys to values, each value can be paired with only a single key. Trying to associate one value with more than one key results in the IllegalArgumentException you are seeing.
Your issue does not sound threading-related; it is data-related.
As an example, the following throws a similar exception. The problem is the presence of the value "Bar" under two separate keys, "Foo" and "Baz":
public static void main(String[] args) {
    BiMap<String, String> m = HashBiMap.create();
    m.put("Foo", "Bar");
    m.put("Baz", "Bar"); // Throws IllegalArgumentException "value already present"
}
This doesn't have anything to do with synchronization, but it's how BiMap works. You can reproduce it easily:
cache.putIfAbsent(1, URI.create("http://example.com"));
cache.putIfAbsent(2, URI.create("http://stackoverflow.com"));
System.out.println(cache);
// {1=http://example.com, 2=http://stackoverflow.com}
cache.putIfAbsent(3, URI.create("http://example.com"));
// java.lang.IllegalArgumentException: value already present: http://example.com
BiMap is "a map that preserves the uniqueness of its values as well as that of its keys." This means that you can't put example.com again, even under a different key. See also the wiki page describing BiMap:
BiMap.put(key, value) will throw an IllegalArgumentException if you attempt to map a key to an already-present value. If you wish to delete any preexisting entry with the specified value, use BiMap.forcePut(key, value) instead.
In your case you could use forcePut and not fail with an exception:
cache.forcePut(3, URI.create("http://example.com"));
System.out.println(cache);
// {2=http://stackoverflow.com, 3=http://example.com}

Kafka streams not using serde after repartitioning

My Kafka Streams application is consuming from a Kafka topic that uses the following key-value layout:
String.class -> HistoryEvent.class
This can be confirmed by printing the current topic:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic flow-event-stream-file-service-test-instance --property print.key=true --property key.separator=" -- " --from-beginning
flow1 -- SUCCESS #C:\Daten\file-service\in\crypto.p12
"flow1" is the String key and the part after -- is the serialized value.
My flow is set up like this:
KStream<String, HistoryEvent> eventStream = builder.stream(applicationTopicName,
        Consumed.with(Serdes.String(), historyEventSerde));
eventStream.selectKey((key, value) -> new HistoryEventKey(key, value.getIdentifier()))
        .groupByKey()
        .reduce((e1, e2) -> e2,
                Materialized.<HistoryEventKey, HistoryEvent, KeyValueStore<Bytes, byte[]>>as(streamByKeyStoreName)
                        .withKeySerde(new HistoryEventKeySerde()));
So, as far as I know, I am telling it to consume the topic using the String and HistoryEvent serdes, as this is what is in the topic. I then 'rekey' it to use a combined key, which should be stored locally using the provided serde for HistoryEventKey.class. As far as I understand, this will cause an additional topic with the new key to be created (it can be seen with the topic list in the Kafka container). This is fine.
Now the problem is the application is unable to start up even from a clean environment with just that one document in the topic:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=KSTREAM-SOURCE-0000000000, topic=flow-event-stream-file-service-test-instance, partition=0, offset=0
Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer (key: org.apache.kafka.common.serialization.StringSerializer / value: HistoryEventSerializer) is not compatible to the actual key or value type (key type: HistoryEventKey / value type: HistoryEvent). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
It is hard to tell from the message where exactly the issue is. It says the problem is in my base topic, but that is not possible, as the key there is not of type HistoryEventKey. Since I have provided a serde for HistoryEventKey in the reduce, it also cannot be the local store.
The only thing that makes sense to me is that it is related to the selectKey operation, which causes a repartitioning and a new topic. However, I am not able to figure out how I can provide the serde to that operation. I do not want to set it as the default, because it is not the default key serde.
After doing some more debugging of the execution I was able to figure out that the new topic is created in the groupByKey step. You can provide a Grouped instance that offers the possibility to specify the Serde used for key and value:
eventStream.selectKey((key, value) -> new HistoryEventKey(key, value.getIdentifier()))
        .groupByKey(Grouped.<HistoryEventKey, HistoryEvent>as(null)
                .withKeySerde(new HistoryEventKeySerde())
                .withValueSerde(new HistoryEventSerde()))
        .reduce((e1, e2) -> e2,
                Materialized.<HistoryEventKey, HistoryEvent, KeyValueStore<Bytes, byte[]>>as(streamByKeyStoreName)
                        .withKeySerde(new HistoryEventKeySerde()));
I encountered a very similar error message, yet I had no groupBy operations, only joins. I'm posting here for the next person who googles around.
org.apache.kafka.streams.errors.StreamsException: ClassCastException while producing data to topic my-processor-KSTREAM-MAP-0000000023-repartition. A serializer (key: org.apache.kafka.common.serialization.StringSerializer / value: org.apache.kafka.common.serialization.StringSerializer) is not compatible to the actual key or value type (key type: java.lang.String / value type: com.mycorp.mySession). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters (for example if using the DSL, `#to(String topic, Produced<K, V> produced)` with `Produced.keySerde(WindowedSerdes.timeWindowedSerdeFrom(String.class))`).
Clearly, as in the original question, I did not want to change the default serdes.
So in my case the solution was to pass a Joined instance in the join, which allows you to pass in the serdes. Note that the error message points to the repartition topic (...-KSTREAM-MAP-...-repartition), which is a bit of a red herring, because the fix goes somewhere else.
How I fixed it (a Joined example):
// ... omitted ...
KStream<String, MySession> mySessions = myStream
        .map((k, v) -> {
            MySession s = new MySession(v);
            k = s.makeKey();
            return new KeyValue<>(k, s);
        });
// ^ the mapping causes the repartition; you cannot, however, specify a serde in there.
// But in the join right below, we can pass a Joined instance and fix it.
return enrichedSessions
        .leftJoin(
                myTable,
                (session, info) -> {
                    session.infos = info;
                    return session;
                },
                // Joined.with(keySerde, valueSerde, otherValueSerde, name);
                // null falls back to the default serde for the table side
                Joined.with(Serdes.String(), new MySessionSerde(), null, "my_enriched_session")
        );

RedisTemplate keys(String pattern) method is giving empty set

I am using org.springframework.data.redis.core.RedisTemplate to store data in a Redis server. I have keys in a pattern similar to "abc#xyz#pqr". I wanted to get all the keys that start with "abc", and was using the RedisTemplate.keys(String pattern) method, as below:
Set<String> redisKeys = redisTemplate.keys("(abc).*");
for (String key : redisKeys) {
    System.out.println(key);
}
But it always gives me an empty set.
// tried this pattern also
Set<String> redisKeys = redisTemplate.keys("abc*");
Please help me out.
Make sure to use StringRedisSerializer to serialize keys. Spring Data Redis defaults to JdkSerializationRedisSerializer, which stores keys in a binary serialized form that glob-style patterns cannot match.
Check out the reference documentation for more details.
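A minimal configuration sketch, assuming a standard RedisConnectionFactory bean inside a @Configuration class:
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(factory);
    // Keys are written as plain UTF-8 strings, so patterns like "abc*" can match them
    template.setKeySerializer(new StringRedisSerializer());
    template.setHashKeySerializer(new StringRedisSerializer());
    return template;
}
With keys serialized this way, redisTemplate.keys("abc*") should return the expected matches. Also note that the KEYS command uses Redis glob patterns, not regular expressions, so "(abc).*" will not match anything either way.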

Redis storing list inside hash

I have to store some machine details in Redis. As there are many different machines, I am planning to use the structure below:
server1 => {name => s1, cpu => 80}
server2 => {name => s2, cpu => 40}
I need to store more than one value against the key cpu. I also need to maintain only the last 10 values in the list of values against cpu.
1) How can I store a list against a key inside the hash?
2) I read about LTRIM, but it accepts a key. How can I do an LTRIM for the key cpu inside server1?
I am using Jedis.
Redis' data structures cannot be nested inside other data structures, so storing a List inside a Hash is not possible. Instead, use different keys for your servers' CPU values (e.g. server1:cpu).
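A minimal sketch with Jedis (the key names are illustrative): keep the scalar fields in the Hash, push CPU samples to a companion List, and trim it to the 10 most recent entries:
try (Jedis jedis = new Jedis("localhost", 6379)) {
    jedis.hset("server1", "name", "s1");  // scalar fields stay in the hash
    jedis.lpush("server1:cpu", "80");     // each new CPU sample goes to a separate list
    jedis.ltrim("server1:cpu", 0, 9);     // keep only the 10 most recent samples
}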
It's possible to do this with the Redisson framework. It allows you to store a reference to one Redis object inside another Redis object through special reference objects that are handled by Redisson.
So your task could be solved using a List inside a Map:
RMap<String, RList<Option>> settings = redisson.getMap("settings");
RList<Option> options1 = redisson.getList("settings_server1_option");
options1.add(new Option("name", "s1"));
options1.add(new Option("cpu", "80"));
settings.put("server1", options1);
RList<Option> options2 = redisson.getList("settings_server2_option");
options2.add(new Option("name", "s2"));
options2.add(new Option("cpu", "40"));
settings.put("server2", options2);
// read it
RList<Option> options2Value = settings.get("server2");
Or using Map inside Map:
RMap<String, RMap<String, String>> settings = redisson.getMap("settings");
RMap<String, String> options1 = redisson.getMap("settings_server1_option");
options1.put("name", "s1");
options1.put("cpu", "80");
settings.put("server1", options1);
RMap<String, String> options2 = redisson.getMap("settings_server2_option");
options2.put("name", "s2");
options2.put("cpu", "40");
settings.put("server2", options1);
// read it
RMap<String, String> options2Value = settings.get("server2");
Disclaimer: I'm a developer of Redisson.
You can encode/stringify the data when pushing it, and decode/parse it when pulling it:
Encode -> Decode
Stringify -> Parse
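A minimal sketch of that approach, assuming a connected Jedis instance as above (the comma-delimited encoding is illustrative; a JSON library would work the same way):
import java.util.Arrays;
import java.util.List;

List<String> cpuSamples = Arrays.asList("80", "75", "90");
// encode: store the whole list as a single delimited string inside the hash
jedis.hset("server1", "cpu", String.join(",", cpuSamples));
// decode: parse the string back into a list when reading
List<String> restored = Arrays.asList(jedis.hget("server1", "cpu").split(","));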

Shortest way to reverse Properties

In Java I have a java.util.Properties object, and I want to obtain another one with the same pairs but with keys converted to values and vice versa.
If there are collisions (i.e. two keys map to the same value), then just pick an arbitrary key as the value.
What is the shortest way to do it?
Feel free to use libraries, commons-collections, or whatever.
You can consider using a BiMap from Google Collections (now Guava), which is essentially a reversible map. It guarantees uniqueness of values as well as keys.
Check out the BiMap API documentation for details.
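A minimal sketch, assuming props is the source Properties object: forcePut lets an arbitrary (here: the last) key win on a value collision, as the question allows, and inverse() then exposes the reversed pairs:
BiMap<String, String> biMap = HashBiMap.create();
for (String name : props.stringPropertyNames()) {
    biMap.forcePut(name, props.getProperty(name)); // forcePut evicts an earlier key on value collision
}
Map<String, String> reversed = biMap.inverse(); // live view with keys and values swapped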
A Properties object is a Hashtable object, so you should be able to do something like:
Hashtable<String, String> reversedProps = new Hashtable<>();
for (String key : props.stringPropertyNames()) {
    reversedProps.put(props.getProperty(key), key);
}
Result: 3 lines of code.
This code is untested, but it should give you the idea.
Something like:
Properties forwards = new Properties();
forwards.load(new FileInputStream("local.properties"));
Properties backwards = new Properties();
for (String propertyName : forwards.stringPropertyNames())
{
    backwards.setProperty(forwards.getProperty(propertyName), propertyName);
}
