Do we have custom serialization capability for EntryProcessor or ExecutorService? The Hazelcast documentation does not specify anything in this regard, and there are no samples covering custom serialization of an EntryProcessor. We are looking for Portable serialization of the EntryProcessor.
import java.io.IOException;
import java.util.Map;

import com.hazelcast.map.EntryProcessor;
import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;

public class SampleEntryProcessor implements EntryProcessor<SampleDataKey, SampleDataValue, SampleDataValue>, Portable {

    private static final long serialVersionUID = 1L;

    // SampleDataValue is assumed to implement Portable as well
    private SampleDataValue sampleDataValue;

    public SampleDataValue process(Map.Entry<SampleDataKey, SampleDataValue> entry) {
        // Sample logic here
        return null;
    }

    @Override
    public int getFactoryId() {
        return 1;
    }

    @Override
    public int getClassId() {
        return 1;
    }

    @Override
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writePortable("i", sampleDataValue);
    }

    @Override
    public void readPortable(PortableReader reader) throws IOException {
        sampleDataValue = reader.readPortable("i");
    }
}
UPDATE: When I try to call the processor, I get the following error.
Exception in thread "main" java.lang.ClassCastException: com.hazelcast.internal.serialization.impl.portable.DeserializedPortableGenericRecord cannot be cast to com.hazelcast.map.EntryProcessor
at com.hazelcast.client.impl.protocol.task.map.MapExecuteOnKeyMessageTask.prepareOperation(MapExecuteOnKeyMessageTask.java:42)
at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processInternal(AbstractPartitionMessageTask.java:45)
Yes, you can use different serialization mechanisms to serialize entry processors, provided that they are correctly configured on the sender and receiver sides. So, after making sure that the Portable factory for your class is registered on the members and on the instance you are sending the entry processor from (for example, your client), it should work.
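For reference, a minimal sketch of such a registration follows. The factory class and its wiring are assumptions matching the snippet above (factory ID 1, class ID 1); SampleDataValue would need to be registered in the same factory under its own class ID.
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableFactory;

// Hypothetical factory matching the factory/class IDs used in the question
public class SampleFactory implements PortableFactory {
    public static final int FACTORY_ID = 1;

    @Override
    public Portable create(int classId) {
        if (classId == 1) {
            return new SampleEntryProcessor();
        }
        // ... other class IDs, e.g. SampleDataValue
        return null;
    }
}

// Register the factory on the member side...
Config config = new Config();
config.getSerializationConfig().addPortableFactory(SampleFactory.FACTORY_ID, new SampleFactory());
HazelcastInstance member = Hazelcast.newHazelcastInstance(config);

// ...and on the client that submits the entry processor
ClientConfig clientConfig = new ClientConfig();
clientConfig.getSerializationConfig().addPortableFactory(SampleFactory.FACTORY_ID, new SampleFactory());
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);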
This question is related to another SO question of mine.
To keep the IndexWriter open for the duration of a partitioned step, I thought of adding the IndexWriter to the ExecutionContext of the partitioner and then closing it in a StepExecutionListenerSupport's afterStep(StepExecution stepExecution) method.
The challenge I am facing with this approach is that the ExecutionContext needs objects to be Serializable.
In light of these two questions, Q1, Q2 -- it doesn't seem feasible, because I can't add a no-arg constructor to my custom writer: IndexWriter doesn't have a no-arg constructor.
public class CustomIndexWriter extends IndexWriter implements Serializable {

    private static final long serialVersionUID = 1L;

    /*
    // Not possible: IndexWriter has no no-arg constructor, and this instance's
    // fields cannot be read before the super(...) call
    private Directory d;
    private IndexWriterConfig conf;

    public CustomIndexWriter() {
        super(this.d, this.conf);
    }
    */

    public CustomIndexWriter(Directory d, IndexWriterConfig conf) throws IOException {
        super(d, conf);
    }

    private void readObject(ObjectInputStream input) throws IOException, ClassNotFoundException {
        input.defaultReadObject();
    }

    private void writeObject(ObjectOutputStream output) throws IOException {
        output.defaultWriteObject();
    }
}
In the code above, I can't add the constructor shown commented out, because a no-arg constructor doesn't exist in the superclass and this instance's fields can't be accessed before the super(...) call.
Is there a way to achieve this?
You can always add a parameter-less constructor. E.g.:
import java.io.IOException;
import java.io.Serializable;
import java.nio.file.Paths;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class CustomWriter extends IndexWriter implements Serializable {

    private Directory lDirectory;
    private IndexWriterConfig iwConfig;

    // A constructor must invoke some superclass constructor, so the
    // parameter-less version delegates to the two-argument one with defaults
    public CustomWriter() throws IOException {
        this(FSDirectory.open(Paths.get(".")), new IndexWriterConfig());
    }

    public CustomWriter(Directory dir, IndexWriterConfig iwConf) throws IOException {
        super(dir, iwConf);
        lDirectory = dir;
        iwConfig = iwConf;
    }

    public Directory getDirectory() { return lDirectory; }
    public IndexWriterConfig getConfig() { return iwConfig; }
    public void setDirectory(Directory dir) { lDirectory = dir; }
    public void setConfig(IndexWriterConfig conf) { iwConfig = conf; }
    // ...
}
EDIT:
Having taken a look at my own code (using Lucene.Net), the IndexWriter needs an Analyzer and a MaxFieldLength, so the super-call would look something like this:
super(new Directory("." + System.getProperty("path.separator")), new StandardAnalyzer(), MaxFieldLength.UNLIMITED);
So adding these values as defaults should fix the issue. Maybe then add getter and setter methods for the Analyzer and the MaxFieldLength, so you have control over them at a later stage.
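In Java Lucene terms, a sketch against the 3.x-era API (where that three-argument constructor existed; the default directory path and analyzer version are assumptions):
import java.io.File;
import java.io.IOException;
import java.io.Serializable;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class CustomWriter extends IndexWriter implements Serializable {

    // Parameter-less constructor: delegate to the real constructor with
    // default values, since a subclass cannot avoid invoking some
    // superclass constructor
    public CustomWriter() throws IOException {
        this(FSDirectory.open(new File(".")),
                new StandardAnalyzer(Version.LUCENE_36),
                MaxFieldLength.UNLIMITED);
    }

    public CustomWriter(Directory dir, Analyzer analyzer, MaxFieldLength mfl) throws IOException {
        super(dir, analyzer, mfl);
    }
}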
I am not sure how, but the following works in Spring Batch, and the ExecutionContext returns a non-null object in StepExecutionListenerSupport.
public class CustomIndexWriter implements Serializable {

    private static final long serialVersionUID = 1L;

    // The wrapped writer is transient, so it is excluded from serialization
    private transient IndexWriter luceneIndexWriter;

    public CustomIndexWriter(IndexWriter luceneIndexWriter) {
        this.luceneIndexWriter = luceneIndexWriter;
    }

    public IndexWriter getLuceneIndexWriter() {
        return luceneIndexWriter;
    }

    public void setLuceneIndexWriter(IndexWriter luceneIndexWriter) {
        this.luceneIndexWriter = luceneIndexWriter;
    }
}
I put an instance of CustomIndexWriter in the step partitioner; the partitioned step chunk works with the writer via getLuceneIndexWriter(), and then in StepExecutionListenerSupport I close this writer.
This way my Spring Batch partitioned step works with a single instance of the Lucene IndexWriter object.
I was expecting a NullPointerException when performing operations on the writer obtained from getLuceneIndexWriter() (since the field is transient), but that doesn't happen. I am not sure why this works, but it does; most likely the in-memory job repository hands the ExecutionContext around by reference and never actually round-trips it through serialization.
For Spring Batch job metadata I am using the in-memory repository, not a DB-based one. I am not sure whether this will continue to work once I start using a database for metadata.
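For illustration, the flow described above might look roughly like the sketch below (the context key and the listener class are hypothetical):
import java.io.IOException;

import org.apache.lucene.index.IndexWriter;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.listener.StepExecutionListenerSupport;

// In the partitioner, each partition's ExecutionContext would get:
//   context.put("customIndexWriter", new CustomIndexWriter(luceneIndexWriter));
// and the chunk writer would call getLuceneIndexWriter() on the retrieved wrapper.

public class IndexWriterClosingListener extends StepExecutionListenerSupport {

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // Retrieve the shared wrapper and close the underlying writer
        CustomIndexWriter wrapper = (CustomIndexWriter)
                stepExecution.getExecutionContext().get("customIndexWriter");
        try {
            wrapper.getLuceneIndexWriter().close();
        } catch (IOException e) {
            throw new IllegalStateException("Failed to close Lucene IndexWriter", e);
        }
        return stepExecution.getExitStatus();
    }
}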
Below is the object to be serialized.
Issues:
1. The object to be serialized is abstract, and the classes that extend it are complex objects. I get an InstantiationError.
2. I thought of serializing only HubMessageBean, which is part of the data stored in the Hazelcast IMap, but that only increased the heap space used; it did not decrease it.
Please let me know if you see any better option for serializing these objects. Right now, Java serialization is already in place.
public abstract class HubMessageAggregate implements Serializable {
    public abstract void addHubMessage(HubMessageBean message) throws IllegalArgumentException;
    public abstract boolean isEmptyAggregate();
}
Below is one of the classes that extends the class to be serialized. I have many such classes.
public class CustomerAggregate extends HubMessageAggregate {

    private static final long serialVersionUID = 4196808128400143445L;
    private static Logger LOG = Logger.getLogger(CustomerAggregate.class);

    @Override
    public void addHubMessage(HubMessageBean message) {
        if (message == null) {
            LOG.fatal("Null message passed into aggregate.");
            return;
        }
        if (message instanceof AccountMessage) {
            getAccountMessageList().add((AccountMessage) message);
        } else if (message instanceof CustomerMessage) {
            getCustomerMessageList().add((CustomerMessage) message);
        }
    }

    // Sorts newest-first by inverting the ascending timestamp comparator
    private final Comparator<HubMessageBean> compareByMessageDateDescending =
            new Comparator<HubMessageBean>() {
                private final Comparator<HubMessageBean> compareByMessageDateAscending =
                        new HubMessageByTimestampAscending();

                @Override
                public int compare(HubMessageBean a, HubMessageBean b) {
                    return compareByMessageDateAscending.compare(a, b) * -1;
                }
            };

    private List<AccountMessage> accountMessageList;
    private List<CustomerMessage> customerMessageList;
    private List<IdentifierMessage> identifierMessageList;
    private List<AddressMessage> addressMessageList;
    private List<CompanySpecialtyMessage> specialtyMessageList;
    private List<CompanySegmentationMessage> segmentationMessageList;
    private List<AccountLocationMessage> accountLocationMessageList;
    private List<TelecomNumberMessage> telecomNumberMessageList;
    private List<AccountAssociationMessage> accountAssociationMessageList;
}
Serialization happens on its own; below is how I deserialize the object, which results in the InstantiationError:
HubMessageAggregate aggregate = messageStash.getAggregate(key);
Below is the KryoSerializer I wrote, with the necessary config added:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.StreamSerializer;

public class HubMessageAggregateKryoSerializer implements StreamSerializer<HubMessageAggregate> {

    private static final ThreadLocal<Kryo> kryoThreadLocal =
            new ThreadLocal<Kryo>() {
                @Override
                protected Kryo initialValue() {
                    Kryo kryo = new Kryo();
                    kryo.register(HubMessageAggregate.class);
                    return kryo;
                }
            };
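    // NOTE (editor's sketch, not part of the original question): the remaining
    // Serializer methods were omitted above. A typical Hazelcast/Kryo pairing
    // looks like the following. Using writeClassAndObject/readClassAndObject
    // records the concrete subclass in the stream, so Kryo never tries to
    // instantiate the abstract HubMessageAggregate; registering only the
    // abstract class and reading by that type is one way to hit the
    // InstantiationError shown below.
    @Override
    public void write(ObjectDataOutput out, HubMessageAggregate aggregate) throws IOException {
        Output output = new Output((OutputStream) out);
        kryoThreadLocal.get().writeClassAndObject(output, aggregate);
        output.flush();
    }

    @Override
    public HubMessageAggregate read(ObjectDataInput in) throws IOException {
        Input input = new Input((InputStream) in);
        return (HubMessageAggregate) kryoThreadLocal.get().readClassAndObject(input);
    }

    @Override
    public int getTypeId() {
        return 10; // arbitrary; must match the serializer config on every member
    }

    @Override
    public void destroy() {
    }
}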
Stack trace of the error:
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.InstantiationError: com.manheim.webservices.ovcoutbound.beans.aggregate.HubMessageAggregate
at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:298)[hazelcast-3.2.4.jar:3.2.4]
at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:227)[hazelcast-3.2.4.jar:3.2.4]
at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:156)[hazelcast-3.2.4.jar:3.2.4]
at com.hazelcast.map.MapService.toObject(MapService.java:852)[hazelcast-3.2.4.jar:3.2.4]
at com.hazelcast.map.proxy.MapProxyImpl.get(MapProxyImpl.java:53)[hazelcast-3.2.4.jar:3.2.4]
at com.manheim.webservices.ovcoutbound.cep.cache.MessageStashHazelcastProvider.stashHubMessage(MessageStashHazelcastProvider.java:53)[MessageStashHazelcastProvider.class:]
at com.manheim.webservices.ovcoutbound.cep.cache.MessageStashStrategy.aggregate(MessageStashStrategy.java:54)[MessageStashStrategy.class:]
at org.apache.camel.processor.aggregate.AggregateProcessor.onAggregation(AggregateProcessor.java:474)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.processor.aggregate.AggregateProcessor.doAggregation(AggregateProcessor.java:320)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.processor.aggregate.AggregateProcessor.doProcess(AggregateProcessor.java:254)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.processor.aggregate.AggregateProcessor.process(AggregateProcessor.java:179)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:398)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.processor.ChoiceProcessor.process(ChoiceProcessor.java:111)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)[camel-core-2.14.0.jar:2.14.0]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:398)[camel-core-2.14.0.jar:2.14.0]
I am trying to create new language support for NetBeans 7.4 and higher.
When files are saved locally, I need to deploy them to a server, so I need to handle the save event. I did this by implementing Savable:
public class VFDataObject extends MultiDataObject implements Savable {
    .......
    @Override
    public void save() throws IOException {
        .......
    }
}
And it worked perfectly for the Save event. But then I realized I need to extend HtmlDataObject instead of MultiDataObject:
public class VFDataObject extends HtmlDataObject implements Savable {
    .......
    @Override
    public void save() throws IOException {
        .......
    }
}
And now save() doesn't get executed. Why, given that HtmlDataObject extends MultiDataObject? What should be done to make this work?
Also, is there a way to catch the Save All event in NetBeans? Do you have any info on whether anything changed in 8.0 in this regard?
Thanks a lot.
Have you tried the OnSaveTask SPI (https://netbeans.org/bugzilla/show_bug.cgi?id=140719)? The API can be used to perform tasks when files of a given type are saved.
Something like this can be used to listen to all the save events on a given MIME type (in this case "text/x-sieve-java"):
public static class CustomOnSaveTask implements OnSaveTask {

    private final Context context;

    public CustomOnSaveTask(Context ctx) {
        context = ctx;
    }

    @Override
    public void performTask() {
        System.out.println(">>> Save performed on " +
                NbEditorUtilities.getDataObject(context.getDocument()).toString());
    }

    @Override
    public void runLocked(Runnable r) {
        r.run();
    }

    @Override
    public boolean cancel() {
        return true;
    }

    @MimeRegistration(mimeType = "text/x-sieve-java", service = OnSaveTask.Factory.class, position = 1600)
    public static class CustomOnSaveTaskFactory implements OnSaveTask.Factory {

        @Override
        public OnSaveTask createTask(Context cntxt) {
            return new CustomOnSaveTask(cntxt);
        }
    }
}
I am using guava-libraries LoadingCache to cache classes in my app.
Here is the class I have come up with:
public class MethodMetricsHandlerCache {

    private Object targetClass;
    private Method method;
    private Configuration config;

    private LoadingCache<String, MethodMetricsHandler> handlers = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .build(
                    new CacheLoader<String, MethodMetricsHandler>() {
                        public MethodMetricsHandler load(String identifier) {
                            return createMethodMetricsHandler(identifier);
                        }
                    });

    private MethodMetricsHandler createMethodMetricsHandler(String identifier) {
        return new MethodMetricsHandler(targetClass, method, config);
    }

    public void setTargetClass(Object targetClass) {
        this.targetClass = targetClass;
    }

    public void setMethod(Method method) {
        this.method = method;
    }

    public void setConfig(Configuration config) {
        this.config = config;
    }

    public MethodMetricsHandler getHandler(String identifier) throws ExecutionException {
        return handlers.get(identifier);
    }
}
I am using this class as follows to cache MethodMetricsHandler instances:
...
private static MethodMetricsHandlerCache methodMetricsHandlerCache = new MethodMetricsHandlerCache();
...
MethodMetricsHandler handler = getMethodMetricsHandler(targetClass, method, config);
private MethodMetricsHandler getMethodMetricsHandler(Object targetClass, Method method, Configuration config) throws ExecutionException {
String identifier = targetClass.getClass().getCanonicalName() + "." + method.getName();
methodMetricsHandlerCache.setTargetClass(targetClass);
methodMetricsHandlerCache.setMethod(method);
methodMetricsHandlerCache.setConfig(config);
return methodMetricsHandlerCache.getHandler(identifier);
}
My question:
Is this creating a cache of MethodMetricsHandler objects keyed on the identifier? (I have not used this before, so this is just a sanity check.)
Also, is there a better approach? Without caching, I would have hundreds of instances of the same MethodMetricsHandler for a given identifier.
Yes, it does create a cache of MethodMetricsHandler objects. This approach is generally not bad, though I might be able to say more if you described your use case, because this solution is quite unusual: you've partially reinvented the factory pattern.
Also think about these suggestions (a sketch addressing them follows the list):
It's very odd that you need to call three setters before calling getHandler.
As Configuration is not part of the key, you'll get the same object from the cache for different configurations that share the same targetClass and method.
Why is targetClass an Object? You may want to pass a Class<?> instead.
Are you planning to evict objects from the cache?
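For illustration, a minimal sketch combining those suggestions: an immutable key carries everything the loader needs, so there are no setters and the configuration participates in the cache entry's identity. HandlerKey is hypothetical, and it assumes MethodMetricsHandler can accept a Class<?> and that Configuration implements equals/hashCode:
import java.lang.reflect.Method;
import java.util.Objects;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Hypothetical immutable key: no mutable state needs to be set before a lookup
public final class HandlerKey {
    final Class<?> targetClass;
    final Method method;
    final Configuration config;

    public HandlerKey(Class<?> targetClass, Method method, Configuration config) {
        this.targetClass = targetClass;
        this.method = method;
        this.config = config;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof HandlerKey)) {
            return false;
        }
        HandlerKey k = (HandlerKey) o;
        return targetClass.equals(k.targetClass)
                && method.equals(k.method)
                && config.equals(k.config);
    }

    @Override
    public int hashCode() {
        return Objects.hash(targetClass, method, config);
    }
}

// The cache itself then needs no setters at all:
LoadingCache<HandlerKey, MethodMetricsHandler> handlers = CacheBuilder.newBuilder()
        .maximumSize(1000)
        .build(new CacheLoader<HandlerKey, MethodMetricsHandler>() {
            public MethodMetricsHandler load(HandlerKey key) {
                return new MethodMetricsHandler(key.targetClass, key.method, key.config);
            }
        });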
I have an RMI remote interface:
public interface JMXManager extends Remote {
    public MFSMBeanServerConnection getMBeanServerConnection(String className)
            throws RemoteException;
}
And MFSMBeanServerConnection and MFSMBeanServerConnectionImpl, which I created in order to serialize MBeanServerConnection:
public interface MFSMBeanServerConnection extends Serializable {
    public MBeanServerConnection getMBeanServerConnection();
}

public class MFSMBeanServerConnectionImpl implements MFSMBeanServerConnection {

    private static final long serialVersionUID = 1006978249744538366L;

    /**
     * @serial
     */
    private MBeanServerConnection mBeanServerConnection;

    public MFSMBeanServerConnectionImpl() {}

    public MFSMBeanServerConnectionImpl(MBeanServerConnection mBeanServerConnection) {
        this.mBeanServerConnection = mBeanServerConnection;
    }

    public MBeanServerConnection getMBeanServerConnection() {
        return mBeanServerConnection;
    }

    // Note: the field is not transient, so defaultWriteObject() already tries
    // to serialize it; the explicit writeObject() below writes it a second time
    private void readObject(ObjectInputStream aInputStream) throws ClassNotFoundException,
            IOException {
        aInputStream.defaultReadObject();
        mBeanServerConnection = (MBeanServerConnection) aInputStream.readObject();
    }

    private void writeObject(ObjectOutputStream aOutputStream) throws IOException {
        aOutputStream.defaultWriteObject();
        aOutputStream.writeObject(mBeanServerConnection);
    }

    private void readObjectNoData() throws ObjectStreamException {
    }
}
On the client side I have:
JMXManager jmxm = (JMXManager) registry.lookup("JMXManager");
MFSMBeanServerConnection mfsMbsc = jmxm.getMBeanServerConnection(className);
On the second line I get an exception:
java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:173)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132)
at $Proxy0.getMBeanServerConnection(Unknown Source)
My goal is to create an RMI server that:
is used by one or more JMX servers, which store their MBeanServerConnection
lets a client take one MBeanServerConnection and use (manipulate) its MBeans
What am I doing wrong?
How can I serialize javax.management.MBeanServerConnection so that I can use it with a remote interface?
I think it's a bad idea to serialize MBeanServerConnection, because it holds a lot of runtime information, some of which will not be available or valid when you deserialize it.
I think that is the reason all of its known subinterfaces (MBeanServer, MBeanServerForwarder) also do not implement Serializable.
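A common alternative, sketched below as an assumption about your setup rather than a drop-in fix, is to return the serializable JMXServiceURL across RMI and let each client rebuild the connection locally:
import java.rmi.Remote;
import java.rmi.RemoteException;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Instead of shipping the live connection, ship what is needed to recreate it;
// JMXServiceURL is serializable, so it crosses the RMI boundary safely.
public interface JMXManager extends Remote {
    JMXServiceURL getMBeanServerUrl(String className) throws RemoteException;
}
On the client side, the hypothetical usage would then be:
JMXServiceURL url = jmxm.getMBeanServerUrl(className);
JMXConnector connector = JMXConnectorFactory.connect(url);
MBeanServerConnection mbsc = connector.getMBeanServerConnection();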