Interacting with a smart contract from a Java application (Web3j) - java

I have many questions about the workflow (sequence) of interacting with a smart contract from a Java application, so I will first explain what I have done and then ask my questions; if something is wrong in my understanding, please let me know.
1. I have written a smart contract.
2. Used truffle to get the smart contract's Java wrapper (contract.java).
3. Used testrpc to test the contract.
I have 2 classes that use testrpc accounts (credentials) to interact with the smart contract and call its functions.
Each class (node1.java, node2.java) calls a function in the smart contract called (send) to send its data to the chain.
I have added an event which triggers once the 2 nodes have sent their data.
What I don't understand is how I can let the Java code (let's say MainProgram.class) continuously check for that event, because I need to check whether both nodes have sent their data; then I will call another function to analyse this data.
How can I manage, control and check which transactions have been done or not? I mean, how can I use the events in Java code and let the code run forever, so that when this event happens, it takes action?
I hope I have explained what I need clearly.
Thank you in advance.

My answer to one of your previous questions applies here. Yes, you will probably want to set up a dedicated process to listen for events. But you don't need an account, or even to be the owner or a client of a smart contract, to listen to its events on the public blockchain (this is why it's called "public").
To listen to events, all you need are the contract ABI and the contract's address. Both should be easy for you to get. You can set up a listener on all of the events this contract emits. From the web3j documentation:
You use the EthFilter type to specify the topics that you wish to apply to the filter. This can include the address of the smart contract you wish to apply the filter to. You can also provide specific topics to filter on. Where the individual topics represent indexed parameters on the smart contract:
EthFilter filter = new EthFilter(DefaultBlockParameterName.EARLIEST,
        DefaultBlockParameterName.LATEST, <contract-address>)
        [.addSingleTopic(...) | .addOptionalTopics(..., ...) | ...];
This filter can then be created using a similar syntax to the block and transaction filters above:
web3j.ethLogObservable(filter).subscribe(log -> {
...
});
Specifying the block parameters lets you decide how far back in history you want to start processing events. Keep in mind that in Ethereum, events are actually logs on the blockchain. When you listen for events, you're actually looking for activity in the logs as blocks are added to the chain. Therefore, you can go as far back in history as you want and look at older blocks to process old events as well.
To listen to all events in your contract, you would just do:
EthFilter filter = new EthFilter(DefaultBlockParameterName.EARLIEST, DefaultBlockParameterName.LATEST, <CONTRACT_ADDRESS>);
web3j.ethLogObservable(filter).subscribe(log -> System.out.println(log.toString()));
IMPORTANT NOTE ON THE CONTRACT ADDRESS WITH ETHFILTER - There is a bug in Web3j concerning the contract address in EthFilter. The API doesn't like the leading '0x' in the contract address. To get around this, if you're using the contract object, pass the contract address in using contract.getContractAddress().substring(2);
If you're interested in specific events, you need to add a topic to your filter. The example below will listen for all MyEvent events, which contain a single indexed address parameter and two non-indexed uint256 params:
Web3j web3j = Web3j.build(new HttpService());

Event event = new Event("MyEvent",
        Arrays.<TypeReference<?>>asList(new TypeReference<Address>() {}),
        Arrays.<TypeReference<?>>asList(new TypeReference<Uint256>() {}, new TypeReference<Uint256>() {}));

EthFilter filter = new EthFilter(DefaultBlockParameterName.EARLIEST, DefaultBlockParameterName.LATEST, <CONTRACT_ADDRESS>);
filter.addSingleTopic(EventEncoder.encode(event));
web3j.ethLogObservable(filter).subscribe(log -> System.out.println(log.toString()));
The above can run in any server process. Within the server process, you can connect to the network either through a local node (the default localhost when using new HttpService()), or you can sign up with Infura, create an API key, and use their node cluster (example Ropsten URL: new HttpService("https://ropsten.infura.io/<YOUR_API_KEY>")).
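
To tie this back to your MainProgram question: below is a minimal sketch of such a dedicated listener process, assuming a hypothetical event signature DataSent(address indexed sender) and a placeholder contract address; swap in your real event name, parameter types and address.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.web3j.abi.EventEncoder;
import org.web3j.abi.TypeReference;
import org.web3j.abi.datatypes.Address;
import org.web3j.abi.datatypes.Event;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.core.methods.request.EthFilter;
import org.web3j.protocol.http.HttpService;

public class MainProgram {

    public static void main(String[] args) throws Exception {
        // Defaults to http://localhost:8545, i.e. your local testrpc node
        Web3j web3j = Web3j.build(new HttpService());

        // Hypothetical event: event DataSent(address indexed sender);
        Event dataSent = new Event("DataSent",
                Arrays.<TypeReference<?>>asList(new TypeReference<Address>() {}),
                Arrays.<TypeReference<?>>asList());

        // Strip the leading "0x" to work around the EthFilter bug noted above
        String contractAddress = "0xYourContractAddress".substring(2);

        EthFilter filter = new EthFilter(DefaultBlockParameterName.EARLIEST,
                DefaultBlockParameterName.LATEST, contractAddress);
        filter.addSingleTopic(EventEncoder.encode(dataSent));

        Set<String> sendersSeen = new HashSet<>();
        web3j.ethLogObservable(filter).subscribe(log -> {
            // Topic 0 is the event signature; topic 1 is the indexed sender
            sendersSeen.add(log.getTopics().get(1));
            if (sendersSeen.size() >= 2) {
                analyse(); // both nodes have sent their data
            }
        });

        // Keep the process alive so the subscription keeps receiving logs
        Thread.currentThread().join();
    }

    private static void analyse() {
        System.out.println("Both nodes have sent their data - analysing...");
    }
}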

Related

Port & Adapter Pattern - invoke multiple implementation of same output port

I have a doubt about how to implement a simple scenario for one of my style exercises following the Hexagonal architecture (Ports & Adapters pattern).
I have a UseCase (or service) that has to send a Contact (received from a contact form) to an external API service.
public class SendContactUseCase {
    // output port interface
    private final SendContact sendContact;

    public SendContactUseCase(SendContact sendContact) {
        this.sendContact = sendContact;
    }

    public void sendContact(ContactRequest req) {
        sendContact.send(req);
    }
}
So I have created an output port with a method send(ContactRequest req); after that I have implemented this interface as a driven adapter where I put the code for the communication with the API:
//driven adapter
public class SendContactToAPIAdapter implements SendContact {
    //private final PossibleAPILib possibleAPIlib...;

    @Override
    public boolean send(ContactRequest req) {
        //HERE the code to communicate with the API
        return true; // placeholder result
    }
}
Here is my doubt: what if at a later moment the need comes to send the "Contact" also to another channel, for example as an XML attachment to a specific email address?
If I have understood the Ports & Adapters pattern correctly, to hide the infrastructure "logic" from the UseCase I should implement a new driven adapter from the same Port, in order to inject the correct adapter into the UseCase:
public class SendContactXMLAdapter implements SendContact
But what if I should invoke both adapters, because I have to send the contact to both systems?
Should I create a third adapter that hides the logic to call both systems?
I hope I've been clear.
Thanks folks
The answer depends on what you actually want to accomplish.
To me, your anticipation of the 'need to send the "Contact" also to another channel' sounds like an additional feature of your business logic.
Based on that assumption, I would implement two separate port/adapter pairs, one for the API and one for XML. Your use case can then decide which ports to call.
This diagram visualizes what I mean: Component Architecture
If you really always want to send the contact to the API and export it as XML, you could leave your use case untouched and just extend the implementation of your SendContactToAPIAdapter to do both. But you should then also consider renaming it.
public interface ContactRegisterer {
    public void registerContact(Contact contactToRegister) throws ContactRegisterException;
}
You could use this interface (it represents a port which is used by the higher-level logic of your application) so that the location of where to save the contact, and how to save it, can be changed independently from the client. This interface could be used to register a contact to a single destination (a different one per configuration) or to multiple destinations. For instance, you could register to a file, a database, a service, or all of them.
When you register to a single destination, all you need is to provide a different implementation of the interface for registering to that destination.
When you register to multiple destinations, you could use the Decorator design pattern: create a wrapper that implements the interface and also contains different objects implementing the same interface. The wrapper is given to the client (it has the same interface, hence the client is not aware that it has been given a wrapper); the wrapper in turn could use multiple threads for registering to each destination, or a for loop within a single thread. A sketch of this wrapper follows below.
Another alternative is to use a modified version of the Chain of Responsibility pattern: you chain multiple (open-ended) instances of the interface, and every instance calls the next one after it is done registering to its own destination.
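
For illustration, here is a minimal sketch of the wrapper/Decorator variant described above, assuming the ContactRegisterer port from the snippet and a simple sequential loop; error-handling and threading strategies are left open.

import java.util.Arrays;
import java.util.List;

// Fans one registration out to several destinations behind the same port;
// the client only ever sees the ContactRegisterer interface.
public class CompositeContactRegisterer implements ContactRegisterer {

    private final List<ContactRegisterer> delegates;

    public CompositeContactRegisterer(ContactRegisterer... delegates) {
        this.delegates = Arrays.asList(delegates);
    }

    @Override
    public void registerContact(Contact contactToRegister) throws ContactRegisterException {
        // Sequential variant; a real implementation must decide how to
        // handle partial failures (fail fast, collect errors, retry, ...)
        for (ContactRegisterer delegate : delegates) {
            delegate.registerContact(contactToRegister);
        }
    }
}

Wiring then happens at the composition root, e.g. new CompositeContactRegisterer(new ApiContactRegisterer(), new XmlContactRegisterer()) (hypothetical adapter names), while the client keeps depending on the single ContactRegisterer port.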
It depends on whether the change is a technical change or a business logic change.
The ports and adapters architecture allows you to change the adapters without changing the Core of the application. If you have tests that validate your business logic, as you should, you'll see that you can swap adapters, everything will work fine and you won't have to change your Core tests. In this case, you can have the two adapters and configure one or the other based on some setting, or as you say, have an adapter that basically calls the other two.
On the other hand, if your business logic requirements change (depending on some business rule you have to send the contact to A or to B), you'll notice that you'll have to change your tests to verify the new requirements. In that case, your Core will define a different Port that will be more suited to the new business logic.

Restful way of sending Pre-Post or Pre-Put warning message

I am trying to convert a legacy application to Restful web services. One of our old forms displayed a warning message immediately on form load. This warning message depends on a user property.
For example, there is a property isInactiveByDefault which, when set to "true", will set the status of a newly created employee via POST v1/employees to "Inactive". On "Employee" form load, the user will see the warning message "Any new employee created will have inactive status".
I originally thought of providing a resource to get the status of the property and let the client handle whether to display the warning message or not based on the value of the property. But my manager wants to avoid any business logic in client side. As per him, this is a business logic and should be handled from server.
I am wondering what's the Restful way of sending this warning message. I could send the message with the POST response, but this would mean the client receives the message after the action.
I think that delegating to the client the choice of whether or not to display the message is at the very frontier of what one may call "business logic". I believe there is room for debate here.
But leaving that point aside: if we consider the message(s) to display as data, then as soon as your client can make REST calls (via JS, AJAX or whatever), you should be able to query the server before or while the form loads (and wait/sync on that).
So it is perfectly fine and "RESTful" to perform a GET request on the service to e.g. retrieve a list of warning messages (possibly internationalized) and display them. The server will prepare this list of warnings based on whichever properties apply and return it (as JSON or whatever). Your client will only have to display the parsed result of the request (0-N messages to list) next to your form.
EDIT
Example REST service for your use case:
@POST
@Path("/v1/employees")
public Response someFormHandlingMethod() {
    // Your form processing
    ...
    if (EmployeeParameters.isInactiveByDefault()) {
        emp.setInactive(true);
    }
    ...
}

@GET
@Path("/v1/employees/formwarnings")
public Response getEmployeeFormWarnings() {
    // "Business logic" -> determine warning messages to return to the client
    ...
    if (EmployeeParameters.isInactiveByDefault()) {
        warnings.add("Any new employee created will have inactive status");
    }
    ...
    // and return them in the Response (entity as JSON or whatever). Could also be List<String> for example.
}
It is just an example so that you get the idea, but the actual implementation will depend on your Jersey configuration, mappers/converters, annotations (such as @Produces), etc.
The focus of a REST architecture is on the decoupling of clients from servers by defining a set of constraints both parties should adhere to. If those constraints are followed stringently, servers can evolve freely in the future without risking breaking clients that also adhere to the REST architecture, while such clients become more robust to change as a consequence. As such, a server should teach clients how certain things need to look (i.e. by providing forms and form controls) and what further actions (i.e. links) the client can perform from the current state it is in.
By asking for Pre-Post or Pre-Put operations I guess you already assume an HTTP interaction, which is quite common for so called "RESTful" systems. Though as Jim Webber pointed out, HTTP is just a transport protocol whose application domain is the transfer of a document from a source to a target and any business rules we conclude from that interaction are just a side effect of the actual document management. So we have to come up with ways to trigger certain business activities based on the processing of certain documents. In addition to that, HTTP does not know anything like pre-post or pre-put or the like.
A rule of thumb in distributed computing is to never trust any client input and thus to validate the input on the server side again. The earliest validation solution worked by the server providing the client with a Web form that taught it the available properties of a resource, together with a set of input controls to interact with that form (i.e. a send button that, upon clicking, marshals the input data into a representation format sent via an HTTP operation the server supports). A client therefore did not need to know any internals of the server, as it was served all the information it needed to make up the next request. This is in essence what HATEOAS is all about. Once the server received a request on a certain endpoint, it started to process the received document, i.e. the input data, and if certain constraints it put on that resource were violated, it simply sent the same form back to the client, this time with the client's input data included and additional markup indicating the input error. Through style attributes added to those errors, they usually stood out notably from the general page design (typically red text and/or a box next to the input control producing the violation) and thus allowed for easier processing of the problems.
With the introduction of JavaScript, some clever people came up with a way to send not only the form to the client but also a script that automatically checks the user input in the background and, on violation, adds the above-mentioned failure markup dynamically to the DOM of the current form/page, thereby informing the user of possible problems with the current input. This, however, requires that the representation format in use, or more formally its media type, supports dynamic content such as scripts and manipulation of the current representation.
Unfortunately, many representation formats exchanged in typical "REST APIs/services" do not support such properties. A plain JSON document, for instance, just defines the basic syntax (i.e. objects are key-value containers between curly braces, ...) but does not define semantics for any of the fields it might contain; it does not even know what a URI is. As such, JSON is not a good representation format for a REST architecture to start with. While HAL/JSON and JSON Hyper-Schema attempt to add support for links and rudimentary semantics, they create their own media types and thus representation formats all along. Other JSON-based media types, such as hal-forms, halo+json (halform) and ion, attempt to add form support. Some of these formats may allow attaching restrictions to certain form elements that automatically produce a user warning and block the actual document transmission if violated, though most of the current media types still lack support for client-side scripts or dynamic content updates.
Such circumstances lead Fielding in one of his famous blog posts to a statement like this:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types
Usually the question should not be which media type to support but how many different ones your server/client should support, as the more media-type representations it can handle, the better it will be able to interact with its peers. And ideally you want to reuse the same client for all interactions, while the same server should be able to serve a multitude of clients, especially ones not under your direct control.
In regards to a "RESTful" solution, I would not recommend using custom headers or proprietary message formats, as they are usually only understood, and thus processable, by a limited number of clients, which is the opposite of what the REST architecture aims to achieve. Such messages also usually require a priori knowledge of the server's behavior, which may lead to problems if the server ever needs to change.
So, long story short, in essence the safest way to introduce "RESTful" input validation (~ one that is in accordance with the constraints the REST architecture puts in place) to a resource is by using a form-based representation that teaches a client everything it needs to create an actual request. The request is then processed by the server, and if certain input constraints are violated, the form is returned to the client again with a hint about the respective input problem.
If you want to minimize the interaction between client and server due to performance considerations (which are usually not the core problem at hand) you might need to define your own media type that adds script support and dynamic content updates, through similar means such as DOM manipulation or the like. Such a custom media type however should be registered with IANA to allow other implementors to add support for such types and thus allow other clients to interact with your system in future.

XPages: Observer and Observable

I've been trying to use Java Observer and Observable in a multi-user XPages application, but I'm running into identity conflicts. I'll explain.
Say A and B have the same view on their screens, a list of documents with Readers fields. We want to keep those screens synchronised as much as possible. If A changes something, B might be receiving updates, depending on his rights and roles. We managed to do this using WebSockets, but I want to see if there's a better way, i.e. without sending a message to the client telling it to re-fetch the screen.
Using the Observer mechanism, B can observe changes and push the changed screen to the user. The tricky part here is that if I call notifyObservers as user A and walk through all the observers, A will be executing the Observer.update() method, and not B.
I also thought of using a Timer-like solution, but I'd probably end up with the same conflicts.
Question: is there any way I can properly switch sessions in XPages? Or should I wait for Publish/Subscribe in the XPages server?
I can see 3 possible actions:
Use the SudoUtils from XPages-Scaffolding to run code on behalf of another user
Use DominoJNA to access the data with a different user id (not for the faint of heart)
Just notify the client using the websocket - preferably via a web worker. It would then make a fetch (the artist formerly known as Ajax) to see if changes are needed in the client UI. While this has the disadvantage of incurring a network round trip (websocket + fetch), it has the advantage that you don't need to mess with impersonation, which always carries the risk of something going wrong.
For the first two I would want to pack them into an OSGi bundle to be independent of the particularities of Java loaded from an NSF.
Old answer
Your observer needs to be in an application context, so you can update any Observer. The observer then would use a websocket to the client to tell it: update this ONE record.
The tricky part, which needs planning: have individual websocket addresses, so you notify only the ones that need notification.
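
As a rough illustration of the old answer (an application-scoped subject plus per-user observers that only push a document id down the websocket, so each client re-reads with its own credentials), here is a hedged sketch; the push(...) hook is a hypothetical stand-in for whatever websocket API you use:

import java.util.Observable;
import java.util.Observer;

// Application scope: one shared subject that all user sessions register with
class DocumentChanges extends Observable {
    void documentChanged(String unid) {
        setChanged();
        notifyObservers(unid); // runs in the notifying user's context!
    }
}

// One observer per user session. update() only forwards the document id;
// the client then re-fetches the record under its own identity, so no
// reader-field logic ever executes as the wrong user.
class UserScreenObserver implements Observer {
    private final String userName;

    UserScreenObserver(String userName) {
        this.userName = userName;
    }

    @Override
    public void update(Observable o, Object arg) {
        push(userName, (String) arg);
    }

    private void push(String user, String unid) {
        // hypothetical: hand "update this ONE record" to the user's websocket
    }
}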

invoke a smart contract function from a Java application without the need to listen to events

As I understood, we have to use TransactionReceipt if we want to extract the events...
TransactionReceipt transactionReceipt = contract.someMethod(
<param1>,
...).send();
But what about, for example, if I have a function called "register" and many accounts need to register themselves by invoking the register function?
How can I define accounts (many credentials) if the TransactionReceipt doesn't have parameters for from which account, gas limit, etc.?
One more thing: I invoked the "register" function via a TransactionReceipt as follows:
TransactionReceipt transactionReceipt = contract.register("John",BigInteger.valueOf(101)).send();
but this error appears:
Error processing transaction request: Error: Exceeds block gas limit
Thanks
As I understood, we have to use TransactionReceipt if we want to extract the events...
TransactionReceipt is not the only way to listen for events. You can also set up an Observable filter:
contract.someEventObservable(startBlock, endBlock).subscribe(event -> ...);
TransactionReceipt is a good way to get access to the events thrown for one specific transaction. All events thrown during the transaction are included in the receipt. However, if you want to process events in general across multiple transactions and/or use filters, you want to use an Observable filter. There's an entire section on event filters with examples here.
How can I define accounts (many credentials) if the TransactionReceipt doesn't have parameters for from which account, gas limit, etc.?
If I'm understanding this question correctly, you want to know how to process the events section of the TransactionReceipt? Web3j provides a helper method in the contract instance which will process the logs from TransactionReceipt.
EventValues eventValues = contract.processEVENT_NAMEEvent(transactionReceipt);
Replace EVENT_NAME with the event type you're interested in. Any account specific information you need to identify the event you want (address, name, etc) should be included in the event itself.
EDIT: Based on your comment, it looks like I misunderstood this part of your question. I'll leave my previous answer here in case it's useful for processing events and address your question below.
After you create your contract instance (either through deploy or load), you can change the gas limit and gas price. Both have setters in the wrapper's parent class. Therefore, you can reuse the same wrapper to call different functions in your contract using the appropriate gas parameters for that particular function.
However, you cannot change the underlying Credentials (at least, not without subclassing or changing the generated wrapper). For different credentials, create different wrapper objects using .load.
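For example, a minimal sketch under the assumption of a generated wrapper class named MyContract (your class name will differ) and placeholder keys and gas values:

// Distinct credentials per account; the keys shown are placeholders
Credentials alice = Credentials.create("0x<alice-private-key>");
Credentials bob = Credentials.create("0x<bob-private-key>");

// One wrapper instance per account - the signing credentials are fixed at load time
MyContract asAlice = MyContract.load(contractAddress, web3j, alice, gasPrice, gasLimit);
MyContract asBob = MyContract.load(contractAddress, web3j, bob, gasPrice, gasLimit);

// Each call is now sent from the matching account
TransactionReceipt fromAlice = asAlice.register("John", BigInteger.valueOf(101)).send();
TransactionReceipt fromBob = asBob.register("Jane", BigInteger.valueOf(102)).send();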
but this error appears:
Error processing transaction request: Error: Exceeds block gas limit
I can't help with this without seeing the contract and code used to call the function.

Why people use message/event buses in their code? [closed]

I think you have heard of message/event buses; it's the single place where all events in the system flow. Similar architectures are found in computer motherboards and LAN networks. It's a good approach for motherboards and networks as it reduces the number of wires, but is it good for software development? We don't have such restrictions as electronics does.
The simplest implementation of a message bus/event bus can look like:
class EventBus {
    void addListener(EventBusListener l) {...}
    void fireEvent(Event e) {...}
}
Posting events is done with bus.fireEvent(event); receiving messages is enabled by bus.addListener(listener). Such architectures are sometimes used for software development; for example, MVP4G implements a similar message bus for GWT.
Active projects:
Google Guava EventBus
MBassador by Benjamin Diedrichsen
Mycila PubSub by Mathieu Carbou
mvp4g Event Bus
Simple Java Event Bus
Dormant/Dead projects:
Sun/Oracle JavaBeans InfoBus
https://eventbus.dev.java.net/ [Broken link]
It's just the popular Observer (Listener) pattern made 'global': each object in the system can listen to each message, and I think that's bad; it breaks the Encapsulation principle (each object knows about everything) and the Single Responsibility principle (e.g. when some object needs a new type of message, the event bus often needs to be changed, for example to add a new Listener class or a new method in the Listener class).
For these reasons I think that, for most software, the Observer pattern is better than an event bus. What do you think about the event bus, does it make any good sense for typical applications?
EDIT: I'm not talking about 'big' enterprise solutions like an ESB - they can be useful (what's more, an ESB offers much, much more than just an event bus). I'm asking about the usefulness of a message bus in 'regular' Java code for object-to-object connections - some people do it, check the links above. An event bus is probably the best solution for telephone-to-telephone or computer-to-computer communication, because each telephone (or computer) in a network can typically talk to every other one, and the bus reduces the number of wires. But objects rarely talk to that many others - how many collaborators can one object have - 3, 5?
I am considering using an in-memory event bus for my regular Java code, and my rationale is as follows.
each object in the system can listen to each message, and I think that's bad; it breaks the Encapsulation principle (each object knows about everything)
I am not sure this is really true: a class needs to register with the event bus to start with, similar to the observer pattern. Once a class has registered with the event bus, only the methods which have the appropriate signature and annotation are notified.
and the Single Responsibility principle (e.g. when some object needs a new type of message, the event bus often needs to be changed, for example to add a new Listener class or a new method in the Listener class)
I totally disagree with
event bus often needs to be changed
The event bus is never changed
I agree with
add a new Listener class or a new method in the Listener class
How does this break SRP? I can have a BookEventListener which subscribes to all events pertaining to my Book entity, and yes, I can add methods to this class, but still this class is cohesive...
Why do I plan to use it? It helps me model the "when" of my domain...
Usually we hear something like "send a mail when a book is purchased",
so we go and write:
book.purchase();
sendEmail();
Then we are told to add an audit log when a book is purchased, so we go back to the above snippet:
book.purchase();
sendEmail();
auditBook(); // newly added
Right there, OCP is violated.
I prefer:
book.purchase();
EventBus.raiseEvent(bookPurchasedEvent);
Then keep adding handlers as needed: open for extension, closed for modification. A minimal sketch with a real event bus follows below.
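
For illustration, here is a hedged sketch using Guava's EventBus (one of the libraries listed above); BookPurchasedEvent and the handler names are hypothetical:

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

class BookPurchasedEvent {
    final String title;

    BookPurchasedEvent(String title) {
        this.title = title;
    }
}

class MailHandler {
    @Subscribe
    public void onPurchase(BookPurchasedEvent e) {
        System.out.println("send mail for: " + e.title);
    }
}

// Added later without touching purchase() or MailHandler -
// open for extension, closed for modification
class AuditHandler {
    @Subscribe
    public void onPurchase(BookPurchasedEvent e) {
        System.out.println("audit log for: " + e.title);
    }
}

public class BookShop {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.register(new MailHandler());
        bus.register(new AuditHandler());

        // book.purchase() would post this after its own work is done
        bus.post(new BookPurchasedEvent("Domain-Driven Design"));
    }
}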
Thanks
Some people like it because it is the embodiment of the Facade pattern or Mediator pattern. It centralizes cross-cutting activities like logging, alerting, monitoring, security, etc.
Some people don't like it because it is often a Singleton point of failure. Everyone has to know about it.
I use it heavily in JavaScript. There can be so many various widgets that all need to do some sort of action whenever something else happens - there is no real hierarchy of ownership of objects. Instead of passing references of every object to every object, or just making every object global, when something significant happens inside a particular widget, I can just publish "/thisWidget/somethingHappened" - instead of filling that widget with all kinds of code specific to the API of other widgets. Then I have a single class that contains all the "wiring", or "plumbing" as they like to call it in the Java Spring framework. This class contains references to all of my widgets and has all of the code for what happens after each various event fires.
It is centralized, easy to access and maintain, and if one thing changes or I want a new process to occur on a specific event, I don't have to search through every single class/object/widget to try to find out where something is being handled. I can just go to my "operator" class -- the one that handles all the "wiring" when a particular event happens, and see every repercussion of that event. In this system, every individual widget is completely API agnostic of the other widgets. It simply publishes what has happened to it or what it is doing.
I'm having trouble understanding what you're really asking in your question. You give an example of a simple event bus which is actually just Observable with a different name, then you say:
For these reasons I think that, for most software, the Observer pattern is better than an event bus. What do you think about the event bus, does it make any good sense for typical applications?
..but given your example, they are the same. This makes me wonder if you have ever used something like an Enterprise Service Bus. At a base level an ESB logically does the same thing as the observer pattern, but commercial products add much, much more. It's like an event bus on steroids. They are complicated software products and offer:
Message Pickup
Generate events by listening to various endpoints. The endpoint can be a listener (such as an HTTP server), a messaging system (such as JMS), a database or pretty much anything else you want.
Message Routing
Take your event and send it to one or many endpoints. Routing can be pretty smart: the bus might route the message depending on the message type, the message contents or any other criteria. Routing can be intelligent and dynamic.
Message Transformation
Transforms your message into another format; this can be as simple as from XML to JSON, or from a row in a database table to an HTTP request. Transformation can occur within the data itself, for example swapping date formats.
Data Enrichment
Adds or modifies data in your message by calling services along the way. For example, if a message has a postcode in it, the bus might use a postcode lookup service to add in address data.
..and lots, lots more. When you start looking into the details you can really start to see why people use these things.
Because it can be an important step on the way to decoupling the application modules into a service-based architecture.
So in your case, if you have no intention of decoupling the modules of your application into isolated services, then the native implementation of the Observer pattern will be the simpler solution.
But if you want to build, let's say, a micro-services architecture, the event bus will let you reap the benefits of this architecture style, so you could for instance update and deploy just some parts of your application without affecting others, because they are only connected through the event bus.
So the main question here is the desired level of decoupling between application components.
Some references about it:
http://microservices.io/patterns/data/event-driven-architecture.html
http://tech.grammarly.com/blog/posts/Decoupling-A-Monolithic-Server-Application.html
A good analogy is that of a telephone exchange, where every handset can dial every other handset (and a compromised handset can tune into other conversations). Program control flows are like the wires (cyclomatic complexity, anyone?): they resemble the requirement of having a connection/physical medium between two endpoints. So for N handsets, instead of having NC2 (combinatorial) point-to-point flows, with a bus every new handset adds just one connection, and we get N flows.
A reduction in complexity implies easier-to-understand code. Let's start with the prominent points you have highlighted: 1. Global knowledge, 2. Intrusive modifications.
Global knowledge: Consider a message event to be an envelope. From the event handler/sender perspective there is no data being exposed; everyone sees an envelope (unless a derived class tries to do some inspection using 'instanceof' checks). In a good OOP design, this would never occur.
Intrusive modifications: Instead of an event-specific listener approach, one can use a global event handling approach. As such we have a global event type (on which data is piggybacked and down-cast). This is much like the PropertyChangeSupport model in Java. With a single event type, we only need single sender and listener types. This implies you need not modify the bus/listeners every time you add something new. The ugly down-casting can be softened using the Adapter pattern (please don't start that 'another level of indirection' quote!). Programmers can write assembly in any language, so the need for common sense and smartness cannot be substituted; all I intend to state is that it can be an effective tool.
The actual event receivers can use the listeners (composition/proxy) easily. In such a Java code base, listeners would look like stand-alone inner class declarations (with 'unused' warnings flagged in certain IDEs). This is akin to two players on a beach playing a ball game: the players don't react until they see the ball.
@duffymo points out another interesting aspect: 'Single point of failure'. This would/can in theory affect any object residing in memory (RAM), and is not specific to MessageHandlers.
As a practical example, our app syncs with a web service every x minutes, and if any new data is received, we need to update the GUI. Now, because the SyncAdapter runs on a background thread, you can't simply point to a textview and modify its properties; you have to bubble up an event. And the only way to make sure you catch that event is to have a shared (static, singleton) object passing that event around for subscribers to handle.
