Can I un-assign (clear) all fields of an instance? - java

Is there a simple way to clear all fields of an instance? I mean, I would like to remove all values assigned to the fields of an instance.
ADDED
From the main thread I start a window and another thread which controls the state of the window (the second thread, for example, displays certain panels for a certain period of time). I have a class which contains the state of the window (which stage the user is on, which buttons they have already clicked).
In the end, the user may want to start the whole process from the beginning (it is a game). So, if everything is executed from the beginning, I would like all parameters to be clean (fresh, unassigned).
ADDED
The main thread creates the new object, which is executed in a new thread (and the old thread finishes). So, I cannot create a new object from the old thread; I just have a loop in the second thread.

I don't get it. How can you programmatically decide how to clear various fields?
For normal attributes it can be easy (var = null), but what about composite things or collections? Should it be collection = null, or collection.clear()?
This question is looking for syntactic sugar that wouldn't make much sense.
The best way is to write your own reset() method to customize the behaviour for every single object. Maybe you can turn it into a pattern using an
interface Resettable
{
    void reset();
}
but nothing more than that.
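To tie this back to the collection question above: inside such a reset() it is usually better to reuse an existing collection with clear() than to replace it with null, so anything holding a reference keeps seeing a valid (empty) list. A minimal sketch (the GameState class and its fields are made up for illustration):
import java.util.ArrayList;
import java.util.List;

class GameState implements Resettable {
    private int stage;
    private String lastButton;
    private final List<String> clickedButtons = new ArrayList<>();

    @Override
    public void reset() {
        stage = 0;              // primitives go back to their defaults
        lastButton = null;      // references can simply be nulled
        clickedButtons.clear(); // reuse the collection instead of setting it to null
    }
}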

Is there a simple way to clear all fields of an instance? I mean, I would like to remove all values assigned to the fields of an instance.
Yes, just assign a default value to each one of them. It would take you about 20-30 minutes and will run well forever* (YMMV).
Create a reset() method and invoke it:
class YourClass {
    int a;
    int b;
    boolean c;
    double d;
    String f;
    // and so on...

    public void method1(){}
    public void method2(){}
    public void method3(){}
    // etc.

    // Magic method, reset all the attributes of your instance...
    public void reset(){
        a = 0;
        b = 0;
        c = false;
        d = 0.0;
        f = "";
    }
}
And then just invoke it in your code:
....
YourClass object = new YourClass();
Thread thread = new YourSpecificNewThread(object);
thread.start();
... // Later on, when you decide you have to reset the object, just call your method:
object.reset(); // like new
I don't really see where's the problem with this approach.

You may use reflection:
Try something like this:
Field[] fields = object.getClass().getDeclaredFields();
for (Field f : fields) {
    if (Modifier.isStatic(f.getModifiers()) || f.getType().isPrimitive()) {
        continue; // skip statics; primitives cannot be set to null
    }
    f.setAccessible(true);
    f.set(object, null); // declare or catch IllegalAccessException
}
It's not a beautiful solution, but it may work for you.
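If you also want primitive fields to go back to their defaults (0, false, 0.0), one way is to read the default from a freshly allocated one-element array of the field's type. This is only a sketch (the FieldResetter class name is made up) and it deliberately skips static and final fields:
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

final class FieldResetter {
    static void resetAllFields(Object object) throws IllegalAccessException {
        for (Field f : object.getClass().getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers()) || Modifier.isFinal(f.getModifiers())) {
                continue; // leave statics and finals alone
            }
            f.setAccessible(true);
            if (f.getType().isPrimitive()) {
                // a new array element carries the primitive's default value (0, false, ...)
                f.set(object, Array.get(Array.newInstance(f.getType(), 1), 0));
            } else {
                f.set(object, null);
            }
        }
    }
}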

There is no other way than setting all of them to null.
As an aside, I find that a particularly weird idea. You would be better off creating a new instance instead of trying to reset your old one.
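If the obstacle is that the worker thread only ever sees the old object (as described in the question), one way to still get "create a new instance" behaviour is to share a holder rather than the object itself. A sketch only, assuming a hypothetical GameState class and that both threads agree to go through the holder:
import java.util.concurrent.atomic.AtomicReference;

class GameState { /* window state fields ... */ }

class GameStateHolder {
    private final AtomicReference<GameState> current = new AtomicReference<>(new GameState());

    GameState get() {
        return current.get();
    }

    // Restarting the game swaps in a fresh, fully default instance;
    // the loop in the other thread keeps calling get() and picks it up.
    void restart() {
        current.set(new GameState());
    }
}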

If you want to clear a filter (a Serializable bean) whose null fields your application can handle, you can use BeanUtils (Apache Commons):
Field[] fields = filter.getClass().getDeclaredFields();
for (Field f : fields) {
    if (f.getName().endsWith("serialVersionUID")) {
        continue;
    }
    try {
        BeanUtils.setProperty(filter, f.getName(), null);
    } catch (IllegalAccessException | InvocationTargetException e) {
        FacesUtils.handleError(LOG, "Error clearing filter...", e);
    }
}
I hope it can help you.

Related

Can ChronicleQueue tailers for two different queues be interleaved?

I have two separate ChronicleQueues that were created by independent threads that monitor web socket streams in a Java application. When I read each queue independently in a separate single-thread program, I can traverse each entire queue as expected - using the following minimal code:
final ExcerptTailer queue1Tailer = queue1.createTailer();
final ExcerptTailer queue2Tailer = queue2.createTailer();
while (true)
{
    try( final DocumentContext context = queue1Tailer.readingDocument() )
    {
        if ( isNull(context.wire()) )
            break;
        counter1++;
        queue1Data = context.wire()
                            .bytes()
                            .readObject(Queue1Data.class);
        queue1Writer.write(String.format("%d\t%d\t%d%n", counter1, queue1Data.getEventTime(), queue1Data.getEventContent()));
    }
}
while (true)
{
    try( final DocumentContext context = queue2Tailer.readingDocument() )
    {
        if ( isNull(context.wire()) )
            break;
        counter2++;
        queue2Data = context.wire()
                            .bytes()
                            .readObject(Queue2Data.class);
        queue2Writer.write(String.format("%d\t%d\t%d%n", counter2, queue2Data.getEventTime(), queue2Data.getEventContent()));
    }
}
In the above, I am able to read all the Queue1Data objects, then all the Queue2Data objects, and access values as expected. However, when I try to interleave reading the queues (read an object from one queue and, based on a property of the Queue1Data object (a time stamp, the limit variable below), read Queue2Data objects until the first object after that time stamp of the active Queue1Data object is found, then do something with it), an exception is thrown after only one object from the queue2Tailer is read: DecoratedBufferUnderflowException: readCheckOffset0 failed. The simplified code that fails is below (I have tried putting the outer while(true) loop both inside and outside the queue2Tailer try block):
final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
try( final DocumentContext queue1Context = queue1Tailer.readingDocument() )
{
    final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
    while (true)
    {
        try( final DocumentContext queue2Context = queue2Tailer.readingDocument() )
        {
            if ( isNull(queue2Context.wire()) )
            {
                terminate = true;
                break;
            }
            queue2Data = queue2Context.wire()
                                      .bytes()
                                      .readObject(Queue2Data.class);
            while(true)
            {
                queue1Data = queue1Context.wire()
                                          .bytes()
                                          .readObject(Queue1Data.class); // first read succeeds
                if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues
                {                                       // but the second read fails
                    // cache a value
                    break;
                }
            }
            // continue working with queue2Data object and cached values
        } // end try block for queue2 tailer
    } // end outer while loop
} // end outer try block for queue1 tailer
I have tried as above, and also with both Tailers created at the beginning of the function which does the processing (a private function executed when a button is clicked in a relatively simple Java application). Basically I took the loop which worked independently and put it inside another loop in the function, expecting no problems. I think I am missing something crucial in how tailers are positioned and used to read objects, but I cannot figure out what it is, since the same basic code works when reading the queues independently. The use of isNull(context.wire()) to determine when there are no more objects in a queue I got from one of the examples, though I am not sure it is the proper way to determine when there are no more objects in a queue when processing the queue sequentially.
Any suggestions would be appreciated.
You're not writing it correctly in the first instance.
Now, there's a hardcore way of achieving what you are trying to achieve (that is, doing everything explicitly, at a lower level), and there's the MethodReader/MethodWriter magic provided by Chronicle.
Hardcore way
Writing
// write first event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("first").text("Hello first");
}
// write second event type
try (DocumentContext dc = queueAppender.writingDocument()) {
    dc.wire().writeEventName("second").text("Hello second");
}
This will write different types of messages into the same queue, and you will be able to easily distinguish those when reading.
Reading
StringBuilder reusable = new StringBuilder();
while (true) {
    try (DocumentContext dc = tailer.readingDocument()) {
        if (!dc.isPresent()) {
            continue;
        }
        dc.wire().readEventName(reusable);
        if ("first".contentEquals(reusable)) {
            // handle first
        } else if ("second".contentEquals(reusable)) {
            // handle second
        }
        // optionally handle other events
    }
}
The Chronicle Way (aka Peter's magic)
This works with any marshallable types, as well as any primitive types and CharSequence subclasses (i.e. Strings), and Bytes. For more details have a read of MethodReader/MethodWriter documentation.
Suppose you have some data classes:
public class FirstDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
// data fields...
}
public class SecondDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
// data fields...
}
Then, to write those data classes to the queue, you just need to define the interface, like this:
interface EventHandler {
void first(FirstDataType first);
void second(SecondDataType second);
}
Writing
Then, writing data is as simple as:
final EventHandler writer = appender.methodWriterBuilder(EventHandler.class).get();
// assuming firstDatum and secondDatum are created earlier
writer.first(firstDatum);
writer.second(secondDatum);
What this does is the same as in the hardcore section - it writes event name (which is taken from the method name in method writer, i.e. "first" or "second" correspondingly), and then the actual data object.
Reading
Now, to read those events from the queue, you need to provide an implementation of the above interface, that will handle corresponding event types, e.g.:
// you implement this to read data from the queue
private class MyEventHandler implements EventHandler {
public void first(FirstDataType first) {
// handle first type of events
}
public void second(SecondDataType second) {
// handle second type of events
}
}
And then you read as follows:
EventHandler handler = new MyEventHandler();
MethodReader reader = tailer.methodReader(handler);
while (true) {
reader.readOne(); // readOne returns boolean value which can be used to determine if there's no more data, and pause if appropriate
}
Misc
You don't have to use the same interface for reading and writing. In case you want to only read events of second type, you can define another interface:
interface OnlySecond {
void second(SecondDataType second);
}
Now, if you create a handler implementing this interface and give it to tailer#methodReader() call, the readOne() calls will only process events of second type while skipping all others.
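For example, a reader restricted to the second event type might look like this (a minimal sketch; running is an assumed stop flag and the back-off is only illustrative):
OnlySecond onlySecond = second -> {
    // only "second" events reach this handler; readOne() silently skips the rest
    System.out.println("second: " + second);
};
MethodReader reader = tailer.methodReader(onlySecond);
while (running) {
    if (!reader.readOne()) {
        Thread.yield(); // nothing available right now
    }
}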
This also works for MethodWriters, i.e. if you have several processes writing different types of data and one process consuming all that data, it is not uncommon to define multiple interfaces for writing data and then a single interface extending all the others for reading, e.g.:
interface FirstOut {
void first(String first);
}
interface SecondOut {
void second(long second);
}
interface ThirdOut {
void third(ThirdDataType third);
}
interface AllIn extends FirstOut, SecondOut, ThirdOut {
}
(I deliberately used different data types for method parameters to show how it is possible to use various types)
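Putting that together, each producing process builds a writer for its own narrow interface, and the consumer builds a single reader over AllIn. A sketch only; the appender and tailer are assumed to come from the usual ChronicleQueue setup:
// in the producing processes
FirstOut firstWriter = appender.methodWriterBuilder(FirstOut.class).get();
firstWriter.first("hello");

SecondOut secondWriter = appender.methodWriterBuilder(SecondOut.class).get();
secondWriter.second(42L);

// in the consuming process: one handler sees every event type
AllIn allIn = new AllIn() {
    @Override public void first(String first)        { /* ... */ }
    @Override public void second(long second)        { /* ... */ }
    @Override public void third(ThirdDataType third) { /* ... */ }
};
MethodReader reader = tailer.methodReader(allIn);
while (reader.readOne()) {
    // keep draining until no more messages are available
}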
With further testing, I have found that nested loops reading multiple queues which contain data in different POJO classes are possible. The problem with the code in the above question is that queue1Context is obtained once, OUTSIDE the loop that I expected to read queue1Data objects. My fundamental misconception was that DocumentContext objects managed stepping through objects in a queue, whereas actually ExcerptTailer objects manage stepping (maintaining indices) when reading a queue sequentially.
In case it might help someone else just getting started with ChronicleQueues, the inner loop in the original question should be:
while(true)
{
    try (final DocumentContext queue1Context = queue1Tailer.readingDocument())
    {
        queue1Data = queue1Context.wire()
                                  .bytes()
                                  .readObject(Queue1Data.class); // first read succeeds
        if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues as expected
        {                                       // and second and subsequent reads now succeed
            // cache a value
            break;
        }
    }
}
And of course the outer-most try block containing queue1Context (in the original code) should be removed.

How does this asynchronous call work in my example

I am learning Java and wonder whether the item in this code line:
useResult(result, item);
will be overwritten by the next call coming from
doItem(item);
Here's the example:
public void doSomeStuff() {
    // List with 100 items
    for (Item item : list) {
        doItem(item);
    }
}

private void doItem(final Item item) {
    someAsyncCall(item, new SomeCallback() {
        @Override
        public void onSuccess(final Result result) {
            useResult(result, item);
        }
    });
}
The SomeCallback() happens some time in the future and it's on another thread.
I mean, will the item in useResult(result, item); be the same when the callback returns?
Please advise what happens here.
I mean, will the item in useResult(result, item); be the same when the callback returns?
Of course it will; what would the utility of it be otherwise?
What you are doing is creating 100 different SomeCallback instances, each of which will process a different Item object.
A skeleton for your someAsyncCall may look like this:
public static void someAsyncCall(Item i, Callback callback) {
    CompletableFuture.runAsync(() -> { // new thread
        Result result = computeResult(i);
        callback.onSuccess(result, i);
    });
}
The point is: the Callback, at the moment of instantiation, doesn't know anything about the Item it will get as a parameter. It will only know it when Callback::onSuccess is executed in the future.
So, will Item i change (be assigned a new object)?
No, because it is effectively final within someAsyncCall (the reference is never explicitly reassigned).
You can't even assign i = new Item(), as the compiler will complain about the anonymous class accessing a variable that is not (effectively) final.
You could of course create a new Item and pass it to the callback
Item i2 = new Item();
callback.onSuccess(result, i2);
but then it would become one hell of a nasty library...
Nothing forbids you from doing i.setText("bla") though, unless your Item class is immutable (its member fields are themselves final).
EDIT
If your question is how Java handles objects in method parameters, then the answer is: yes, the parameters are just copies of the original references (Java passes references by value).
You could try with a simple swap method void swap(Item i1, Item i2); and you'll notice the references are effectively swapped only within the function; as soon as you return, the variables still point to their original instances.
But each copy still refers to the original instance.
Coming back to your example: imagine your someAsyncCall waits 10000 ms before executing the callback.
In your for loop, after you call doItem, you also do item.setText("bla");.
When you print item.getText() within useResult you will get bla, even though the text was changed after the async function was called.
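A minimal, self-contained sketch of that thought experiment (the class and method names here are made up for illustration; the real someAsyncCall could be any API that runs the callback later on another thread):
import java.util.concurrent.CompletableFuture;

public class CallbackMutationDemo {
    static class Item {
        private volatile String text = "original";
        void setText(String t) { text = t; }
        String getText()       { return text; }
    }

    interface Callback { void onSuccess(String result, Item item); }

    // runs the callback roughly 1 second later on another thread
    static void someAsyncCall(Item item, Callback cb) {
        CompletableFuture.runAsync(() -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
            cb.onSuccess("result", item);
        });
    }

    public static void main(String[] args) throws InterruptedException {
        Item item = new Item();
        someAsyncCall(item, (result, i) -> System.out.println(i.getText())); // prints "bla"
        item.setText("bla"); // mutation happens before the callback runs
        Thread.sleep(2000);  // keep the JVM alive long enough to see the output
    }
}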

Java: Why do we use property change listeners instead of pointing directly to a single value?

The idea of a listener is that if some field (or attribute) of an object changes, all "registered" listeners will be "notified" and copy the new value to their fields.
However, is it not an unnecessarily complicated and redundant design?
Here is a simple proposal to solve the problem with direct value pointing and boxing where needed.
For example, in case of string field:
public class StringBox {
    protected String value;

    public String getValue() {
        return value;
    }

    public void setValueAtomic(String value) {
        this.value = value;
    }

    public StringBox(String value) {
        this.value = value;
    }

    public StringBox() {
    }

    @Override
    public String toString() {
        return "StringBox{" + "value=" + value + '}';
    }
}

public class StringListenerReplacer {
    protected StringBox field;

    public StringListenerReplacer(StringBox field) {
        this.field = field;
    }

    public void setField(String field){
        this.field.setValueAtomic(field);
    }

    public String getField(){
        return this.field.getValue();
    }
}
public class DemoMain {
public static void main(String[] args) {
StringBox field = new StringBox("DemoMain");
StringListenerReplacer s0 = new StringListenerReplacer(field);
StringListenerReplacer s1 = new StringListenerReplacer(field);
StringListenerReplacer s2 = new StringListenerReplacer(field);
StringListenerReplacer s3 = new StringListenerReplacer(field);
StringListenerReplacer s4 = new StringListenerReplacer(field);
StringListenerReplacer s5 = new StringListenerReplacer(field);
System.out.println("Now, it is quasi a listener set to the string field value, so if one object change it, it changes for all");
System.out.println("s0.getField() = " + s0.getField());
System.out.println("Now, for example s4 changes field");
s4.setField("another value");
System.out.println("s0.getField() = " + s0.getField());
System.out.println("s1.getField() = " + s1.getField());
System.out.println("s2.getField() = " + s2.getField());
System.out.println("s3.getField() = " + s3.getField());
System.out.println("s4.getField() = " + s4.getField());
System.out.println("s5.getField() = " + s5.getField());
}
}
With the listener design, not to mention that it would look syntactically weird, each of these objects would have a reference to every other one (6*5 = 30 references), there would be 6 copies of the string field, and with each change of the field a firePropertyChange with 5 calls looping over all listeners would be made.
Now I understand why, for example, the Eclipse or NetBeans IDEs are extremely slow on weak laptops running on battery.
So the question is, why do people use listeners at all in programming?
The idea of a listener is that if some field (or attribute) of an object changes, all "registered" listeners will be "notified" and copy the new value to their fields.
I think the premise of this is wrong. I'm a fan of using listeners, the observer pattern in general, but have never used them in that way.
You might want to execute an action when a button is clicked. For instance printing some text to the console. Rather than constantly polling the button for changes, with every listener, it's much more efficient if the button calls the listeners instead.
Even if you have a reference to the variable isClicked which is true during the frame the button was clicked, you would constantly have to check if(isClicked) {...}, with every listener.
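A minimal sketch of that push-style notification (the Button and ClickListener names here are made up, not from any particular GUI toolkit):
import java.util.ArrayList;
import java.util.List;

interface ClickListener {
    void onClick();
}

class Button {
    private final List<ClickListener> listeners = new ArrayList<>();

    void addClickListener(ClickListener l) {
        listeners.add(l);
    }

    // called once by the GUI layer when the user clicks;
    // no listener ever has to poll an isClicked flag
    void click() {
        for (ClickListener l : listeners) {
            l.onClick();
        }
    }
}

// usage:
// Button b = new Button();
// b.addClickListener(() -> System.out.println("clicked"));
// b.click();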
One big disadvantage I can see with your design is that it is necessary to have access to the StringListenerReplacer object in order to change a value and have it "listen" for the change.
What happens if I have dozens of different, barely-related classes or logical code blocks which may or may not affect the object in question? The listener pattern allows for a listener to be notified of a change no matter what state the program is in or what has access to it.
It's also much more useful for multithreading, wherein we might synchronize listeners across threads to wait for value changes or even remote calls. Those threads can be in completely different states or executing blocks of code and the listener will always be properly notified.
The official name of the listener design is the observer pattern. You are correct that setting up listeners does add code and complexity. As such you would not use listeners for simple POJOs. You would use them mostly for events and queues.
A video game is a good example of where you would use the observer pattern, but there are countless examples. I worked on a game where there were events all over the place. One of the events was a pause event. At first I would manually notify all the objects that needed to know about the pause event: every enemy, every projectile, the music, etc. I kept doing this because of the cost and complexity of adding the observer pattern to this part of the game. But after a while it became too difficult to notify all the objects manually as the list grew. So I bit the bullet and implemented the observer pattern. This had the overall effect of reducing complexity in my code. With the observer pattern, any object that needed to be notified about the pause event would take it upon itself to register to be notified when that event occurred.
Events and queues might be a little obvious, so let me give you a more mundane example, also involving the same video game. The game was coded in JavaFX, which already has observable patterns built into its classes. I registered listeners to be notified when objects on the screen changed. For example, I recall having an object X on the screen whose placement depended on the placement of another object Y. So I set up a listener so that when the coordinates of object Y changed, I would rearrange object X on the screen. This example shows how it could be useful even for properties of a class.
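In JavaFX terms, that kind of dependency can be expressed with a property listener roughly like this (a sketch using bare properties instead of real nodes; the 10-unit offset is arbitrary):
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleDoubleProperty;

public class PlacementDemo {
    public static void main(String[] args) {
        DoubleProperty yCoord = new SimpleDoubleProperty(0);
        DoubleProperty xCoord = new SimpleDoubleProperty(0);

        // rearrange "X" whenever "Y" moves: keep it 10 units below
        yCoord.addListener((obs, oldVal, newVal) ->
                xCoord.set(newVal.doubleValue() + 10));

        yCoord.set(100);
        System.out.println(xCoord.get()); // 110.0
    }
}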

Instantiate class dynamically based on some constant in Java

I am making a multiplayer game which makes heavy use of a serialisable Event class to send messages over a network. I want to be able to reconstruct the appropriate subclass of Event based on a constant.
So far I have opted for the following solution:
public class EventFactory {
public static Event getEvent(int eventId, ByteBuffer buf) {
switch (eventId){
case Event.ID_A:
return EventA.deserialise(buf);
case Event.ID_B:
return EventB.deserialise(buf);
case Event.ID_C:
return EventC.deserialise(buf);
default:
// Unknown Event ID
return null;
}
}
}
However, this strikes me as being very verbose and involves adding a new 'case' statement every time I create a new Event type.
I am aware of 2 other ways of accomplishing this, but neither seems better*:
Create a mapping of constants -> Event subclasses, and use clazz.newInstance() to instantiate them (using an empty constructor), followed by calling initialise(buf) on the new instance to supply the necessary parameters.
Create a mapping of constants -> Event subclasses, and use reflection to find and call the right method in the appropriate class.
Is there a better approach than the one I am using? Am I perhaps unwise to disregard the alternatives mentioned above?
*NOTE: in this case better means simpler / cleaner but without compromising too much on speed.
You can just use a HashMap<Integer, Event> to get the correct Event for the eventId. Adding or removing events is easy, and as the code grows this is easier to maintain than the switch-case solution; speed-wise it should also be faster.
static final Map<Integer, Event> eventHandlerMap = new HashMap<>();

static
{
    eventHandlerMap.put(eventId_A, new EventHandlerA());
    eventHandlerMap.put(eventId_B, new EventHandlerB());
    ............
}
Instead of your switch statement, you can now just use:
Event event = eventHandlerMap.get(eventId);
if (event != null) {
    return event.deserialise(buf);
}
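A closely related variant, if you are on Java 8+, is to register the static deserialise methods themselves as method references, which avoids keeping prototype Event instances around. A sketch (the EventRegistry name is made up; the Event constants and EventX.deserialise methods are from the question):
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class EventRegistry {
    private static final Map<Integer, Function<ByteBuffer, Event>> DESERIALISERS = new HashMap<>();

    static {
        DESERIALISERS.put(Event.ID_A, EventA::deserialise);
        DESERIALISERS.put(Event.ID_B, EventB::deserialise);
        DESERIALISERS.put(Event.ID_C, EventC::deserialise);
    }

    public static Event getEvent(int eventId, ByteBuffer buf) {
        Function<ByteBuffer, Event> deserialiser = DESERIALISERS.get(eventId);
        return deserialiser != null ? deserialiser.apply(buf) : null; // null for unknown IDs
    }
}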
If you're not afraid of reflection, you could use:
private static final Map<Integer, Method> EVENTID_METHOD_MAP = new LinkedHashMap<>();
static {
try {
for (Field field : Event.class.getFields())
if (field.getName().startsWith("ID_")) {
String classSuffix = field.getName().substring(3);
Class<?> cls = Class.forName("Event" + classSuffix);
Method method = cls.getMethod("deserialize", ByteBuffer.class);
EVENTID_METHOD_MAP.put(field.getInt(null), method);
}
} catch (IllegalAccessException|ClassNotFoundException|NoSuchMethodException e) {
throw new ExceptionInInitializerError(e);
}
}
public static Event getEvent(int eventId, ByteBuffer buf)
throws InvocationTargetException, IllegalAccessException {
return (Event) EVENTID_METHOD_MAP.get(eventId).invoke(null, buf);
}
This solution requires that int ID_N always maps to class EventN, where N can be any String where all characters return true for the method java.lang.Character.isJavaIdentifierPart(c). Also, class EventN must define a static method called deserialize with one ByteBuffer argument that returns an Event.
You could also check if field is static before trying to get its field value. I just forget how to do that at the moment.
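For completeness (not part of the original answer): the static check mentioned above can be done with java.lang.reflect.Modifier, roughly like this:
import java.lang.reflect.Modifier;

// inside the loop over Event.class.getFields():
if (!Modifier.isStatic(field.getModifiers())) {
    continue; // field.getInt(null) only makes sense for static fields
}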

ThreadLocal value access across different threads

Given that a ThreadLocal variable holds different values for different threads, is it possible to access the value of one ThreadLocal variable from another thread?
I.e. in the example code below, is it possible in t1 to read the value of TLocWrapper.tlint from t2?
public class Example
{
public static void main (String[] args)
{
Tex t1 = new Tex("t1"), t2 = new Tex("t2");
new Thread(t1).start();
try
{
Thread.sleep(100);
}
catch (InterruptedException e)
{}
new Thread(t2).start();
try
{
Thread.sleep(1000);
}
catch (InterruptedException e)
{}
t1.kill = true;
t2.kill = true;
}
private static class Tex implements Runnable
{
final String name;
Tex (String name)
{
this.name = name;
}
public boolean kill = false;
public void run ()
{
TLocWrapper.get().tlint.set(System.currentTimeMillis());
while (!kill)
{
// read value of tlint from TLocWrapper
System.out.println(name + ": " + TLocWrapper.get().tlint.get());
}
}
}
}
class TLocWrapper
{
public ThreadLocal<Long> tlint = new ThreadLocal<Long>();
static final TLocWrapper self = new TLocWrapper();
static TLocWrapper get ()
{
return self;
}
private TLocWrapper () {}
}
As Peter says, this isn't possible. If you want this sort of functionality, then conceptually what you really want is just a standard Map<Thread, Long> - where most operations will be done with a key of Thread.currentThread(), but you can pass in other threads if you wish.
However, this likely isn't a great idea. For one, holding a reference to moribund threads is going to mess up GC, so you'd have to go through the extra hoop of making the key type WeakReference<Thread> instead. And I'm not convinced that a Thread is a great Map key anyway.
So once you go beyond the convenience of the baked-in ThreadLocal, perhaps it's worth questioning whether using a Thread object as the key is the best option? It might be better to give each thread a unique ID (Strings or ints, if they don't already have natural keys that make more sense), and simply use these as the map keys. I realise your example is contrived, but you could do the same thing with a Map<String, Long> and keys of "t1" and "t2".
It would also arguably be clearer since a Map represents how you're actually using the data structure; ThreadLocals are more like scalar variables with a bit of access-control magic than a collection, so even if it were possible to use them as you want it would likely be more confusing for other people looking at your code.
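A small sketch of that suggestion, using a ConcurrentHashMap keyed by the names the Runnables already carry in the question ("t1", "t2"); this is my illustration, not code from the answer:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SharedPerThreadValues {
    private static final Map<String, Long> VALUES = new ConcurrentHashMap<>();

    // each worker records its own value under its own key...
    static void put(String name, long value) {
        VALUES.put(name, value);
    }

    // ...but any thread can read any other worker's value
    static Long get(String name) {
        return VALUES.get(name);
    }
}

// in t1's run(): SharedPerThreadValues.put("t1", System.currentTimeMillis());
// anywhere else: Long t1Start = SharedPerThreadValues.get("t1");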
Based on the answer of Andrzej Doyle, here is a full working solution:
ThreadLocal<String> threadLocal = new ThreadLocal<String>();
threadLocal.set("Test"); // do this in otherThread
Thread otherThread = Thread.currentThread(); // get a reference to the otherThread somehow (this is just for demo)
Field field = Thread.class.getDeclaredField("threadLocals");
field.setAccessible(true);
Object map = field.get(otherThread);
Method method = Class.forName("java.lang.ThreadLocal$ThreadLocalMap").getDeclaredMethod("getEntry", ThreadLocal.class);
method.setAccessible(true);
WeakReference entry = (WeakReference) method.invoke(map, threadLocal);
Field valueField = Class.forName("java.lang.ThreadLocal$ThreadLocalMap$Entry").getDeclaredField("value");
valueField.setAccessible(true);
Object value = valueField.get(entry);
System.out.println("value: " + value); // prints: "value: Test"
All the previous comments still apply of course - it's not safe!
But for debugging purposes it might be just what you need - I use it that way.
I wanted to see what was in ThreadLocal storage, so I extended the above example to show me. Also handy for debugging.
Field field = Thread.class.getDeclaredField("threadLocals");
field.setAccessible(true);
Object map = field.get(Thread.currentThread());
Field table = Class.forName("java.lang.ThreadLocal$ThreadLocalMap").getDeclaredField("table");
table.setAccessible(true);
Object tbl = table.get(map);
int length = Array.getLength(tbl);
for (int i = 0; i < length; i++) {
    Object entry = Array.get(tbl, i);
    Object value = null;
    String valueClass = null;
    if (entry != null) {
        Field valueField = Class.forName("java.lang.ThreadLocal$ThreadLocalMap$Entry").getDeclaredField("value");
        valueField.setAccessible(true);
        value = valueField.get(entry);
        if (value != null) {
            valueClass = value.getClass().getName();
        }
        Logger.getRootLogger().info("[" + i + "] type[" + valueClass + "] " + value);
    }
}
It is only possible if you place the same value in a field which is not a ThreadLocal and access that instead. A ThreadLocal is, by definition, local only to its thread.
The ThreadLocalMap CAN be accessed via reflection with Thread.class.getDeclaredField("threadLocals"), setAccessible(true), and so on.
Do not do that, though. The map is expected to be accessed by the owning thread only and accessing any value of a ThreadLocal is a potential data race.
However, if you can live with the said data races, or just avoid them (a way better idea), here is the simplest solution: extend Thread and define whatever you need there, that's it:
class ThreadX extends Thread {
    int extraField1;
    String blah2; // and so on
}
That's a decent solution that doesn't rely on WeakReferences but requires that you create the threads yourself. You can set a value like this: ((ThreadX) Thread.currentThread()).extraField1 = 22;
Make sure you do not exhibit data races while accessing the fields, so you might need volatile, synchronized, and so on.
Overall, a Map is a terrible idea: never keep references to objects you do not manage/own explicitly, especially when it comes to Thread, ThreadGroup, Class, ClassLoader... A WeakHashMap<Thread, Object> is slightly better; however, you need to access it exclusively (i.e. under a lock), which might hamper performance in a heavily multithreaded environment. WeakHashMap is not the fastest thing in the world.
A ConcurrentMap<WeakReference<Thread>, Object> would be better, but you need a WeakReference that has equals and hashCode...
