I'm trying to use the MC|Brand channel on a Sponge Minecraft server. When I try the following:
Sponge.getChannelRegistrar().getOrCreateRaw(plugin, channel).addListener((data, connection, side) -> {
if(side == Type.CLIENT) {
// do something
}
});
I get this exception:
org.spongepowered.api.network.ChannelRegistrationException: Reserved channels cannot be registered by plugins
at org.spongepowered.server.network.VanillaChannelRegistrar.validateChannel(VanillaChannelRegistrar.java:71) ~[VanillaChannelRegistrar.class:1.12.2-7.3.0]
at org.spongepowered.server.network.VanillaChannelRegistrar.createRawChannel(VanillaChannelRegistrar.java:104) ~[VanillaChannelRegistrar.class:1.12.2-7.3.0]
at org.spongepowered.api.network.ChannelRegistrar.getOrCreateRaw(ChannelRegistrar.java:122) ~[ChannelRegistrar.class:1.12.2-7.3.0]
How can I fix this while still using that channel? Is there an event for reserved MC channel messages?
I registered the channel exactly as Sponge does, but without the check that causes the issue.
To do that, I used Java reflection like this:
RawDataChannel spongeChannel = null; // declare channel
try {
    // First, try the default channel registration, which is faster
    spongeChannel = Sponge.getChannelRegistrar().getOrCreateRaw(plugin, channel);
} catch (ChannelRegistrationException e) { // error -> can't register
    try {
        // Load the internal classes
        Class<?> vanillaRawChannelClass = Class.forName("org.spongepowered.server.network.VanillaRawDataChannel");
        Class<?> vanillaChannelRegistrarClass = Class.forName("org.spongepowered.server.network.VanillaChannelRegistrar");
        Class<?> vanillaBindingClass = Class.forName("org.spongepowered.server.network.VanillaChannelBinding");
        // Get the constructor of the raw channel
        Constructor<?> rawChannelConstructor = vanillaRawChannelClass.getConstructor(ChannelRegistrar.class, String.class, PluginContainer.class);
        spongeChannel = (RawDataChannel) rawChannelConstructor.newInstance(Sponge.getChannelRegistrar(), channel, plugin.getContainer()); // new channel instance
        // Now register the channel
        Method registerChannel = vanillaChannelRegistrarClass.getDeclaredMethod("registerChannel", vanillaBindingClass); // get the method that registers it
        registerChannel.setAccessible(true); // it's a private method, so make it accessible
        registerChannel.invoke(Sponge.getChannelRegistrar(), spongeChannel); // run the channel registration
    } catch (Exception exc) {
        exc.printStackTrace(); // reflection failed
    }
}
if (spongeChannel == null) // channel not registered
    return;
// The channel is now registered by one of the two available methods
spongeChannel.addListener((data, connection, side) -> { // my listener
    if (side == Type.CLIENT) {
        // do something
    }
});
If you get an error, especially when the reflection fails, check for a newer version: a method may have changed its parameters, or a class may have been moved.
You can find the Sponge code on their GitHub.
I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
A message is received on a channel.
If processing fails, the message is nacked with the requeue flag set to false
A copy of the message with additional/updated headers (a retry counter) is published to the same queue
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is publishing the new message after signalling to the container that it should nack the current message, because I can't find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before that.
My working implementation looks as follows. Note that I'm using a ThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
MyConfig.class, configuring the container factory
*/
@Configuration
public class MyConfig {
@Bean
// NB: bean name is important, overwrites autoconfigured bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
ConnectionFactory connectionFactory,
Jackson2JsonMessageConverter messageConverter,
RabbitTemplate rabbitTemplate
) {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
factory.setMessageConverter(messageConverter);
// AOP
var a1 = new CustomHeaderInspectionAdvice();
var a2 = new MyThrowsAdvice(rabbitTemplate);
Advice[] adviceChain = {a1, a2};
factory.setAdviceChain(adviceChain);
return factory;
}
}
/*
MyThrowsAdvice.class, hooking into the exception flow from the listener
*/
public class MyThrowsAdvice implements ThrowsAdvice {
private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);
private final AmqpTemplate amqpTemplate;
public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
this.amqpTemplate = amqpTemplate;
}
public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
var message = message(args);
var cause = ex.getCause();
// opt-in to old protocol by throwing an instance of BusinessException in business logic
if (cause instanceof BusinessException) {
/*
NB: Since we want to trigger execution after the current method fails
with an exception we need to schedule it in another thread and delay
execution until the nack has happened.
*/
new Thread(() -> {
try {
Thread.sleep(1000L);
var messageProperties = message.getMessageProperties();
var count = getCount(messageProperties);
messageProperties.setHeader("xb-count", count + 1);
var routingKey = messageProperties.getReceivedRoutingKey();
var exchange = messageProperties.getReceivedExchange();
amqpTemplate.send(exchange, routingKey, message);
logger.info("Sent!");
} catch (InterruptedException e) {
logger.error("Sleep interrupted", e);
}
}).start();
// NB: Produce the desired nack.
throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be republished with updated headers", cause);
}
}
private static long getCount(MessageProperties messageProperties) {
try {
Long c = messageProperties.getHeader("xb-count");
return c == null ? 0 : c;
} catch (Exception e) {
return 0;
}
}
private static Message message(Object[] args) {
try {
return (Message) args[1];
} catch (Exception e) {
logger.info("Bad cast parse", e);
throw new AmqpRejectAndDontRequeueException(e);
}
}
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay. If the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and dead-letter configuration that republishes the expired message back to the original queue.
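For illustration, a minimal sketch of such a delay queue declared with Spring AMQP (the queue names, TTL value, and use of the default exchange are my assumptions, not from the question; this would sit in a @Configuration class like the one above):
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;

// Failed messages are published to this queue; after the TTL expires the broker
// dead-letters them back to the original queue, so the redelivery survives a crash.
@Bean
public Queue retryQueue() {
    return QueueBuilder.durable("work.queue.retry")                  // hypothetical name
            .withArgument("x-message-ttl", 5000)                     // delay before republish (ms)
            .withArgument("x-dead-letter-exchange", "")              // default exchange
            .withArgument("x-dead-letter-routing-key", "work.queue") // back to the original queue
            .build();
}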
Using the RepublishMessageRecoverer with retries set to maxAttempts=1 should do what you need.
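A rough sketch of that wiring (the exchange and routing key are placeholders; rabbitTemplate and factory come from a configuration like the one in the question):
import org.aopalliance.aop.Advice;
import org.springframework.amqp.rabbit.config.RetryInterceptorBuilder;
import org.springframework.amqp.rabbit.retry.RepublishMessageRecoverer;

// With maxAttempts(1) there are no in-container redeliveries: the recoverer runs
// immediately on the first failure, republishes the message, and the original
// delivery is then acknowledged.
Advice retryInterceptor = RetryInterceptorBuilder.stateless()
        .maxAttempts(1)
        .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "retry.exchange", "retry.routing.key"))
        .build();
factory.setAdviceChain(retryInterceptor);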
If I run the following two tests together, I get the error below.
1st test
@Rule
public GrpcCleanupRule grpcCleanup = new GrpcCleanupRule();
@Test
public void findAll() throws Exception {
// Generate a unique in-process server name.
String serverName = InProcessServerBuilder.generateName();
// Create a server, add service, start, and register for automatic graceful shutdown.
grpcCleanup.register(InProcessServerBuilder
.forName(serverName)
.directExecutor()
.addService(new Data(mockMongoDatabase))
.build()
.start());
// Create a client channel and register for automatic graceful shutdown.
RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(
grpcCleanup.register(InProcessChannelBuilder
.forName(serverName)
.directExecutor()
.build()));
RoleOuter.Response response = stub.findAll(Empty.getDefaultInstance());
assertNotNull(response);
}
2nd test
@Test
public void testFindAll() {
ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8081)
.usePlaintext()
.build();
RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(channel);
RoleOuter.Response response = stub.findAll(Empty.newBuilder().build());
assertNotNull(response);
}
io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=1, target=localhost:8081} was not shutdown properly!!! *~*~*~
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
    at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:94)
If I comment out either one of them there are no errors and the unit tests pass, but the exception is thrown when both are run together.
Edit
Based on the suggestion.
@Test
public void testFindAll() {
ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8081)
.usePlaintext()
.build();
RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(channel);
RoleOuter.Response response = stub.findAll(Empty.newBuilder().build());
assertNotNull(response);
channel.shutdown();
}
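A variant that also waits for termination, as the warning requests (the 5-second timeout and the java.util.concurrent.TimeUnit import are additions of mine):
@Test
public void testFindAll() throws InterruptedException {
    ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8081)
            .usePlaintext()
            .build();
    try {
        RoleServiceGrpc.RoleServiceBlockingStub stub = RoleServiceGrpc.newBlockingStub(channel);
        assertNotNull(stub.findAll(Empty.newBuilder().build()));
    } finally {
        channel.shutdown();
        // Block until in-flight RPCs drain and the channel is fully terminated.
        channel.awaitTermination(5, TimeUnit.SECONDS);
    }
}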
Hey, I just faced a similar issue using the Dialogflow V2 Java SDK, where I received the error:
Oct 19, 2019 4:12:23 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=41, target=dialogflow.googleapis.com:443} was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
Also, having a huge customer base, we started running into "OutOfMemoryError: unable to create new native thread" errors.
After a lot of debugging and using VisualVM thread monitoring, I finally figured out that the problem was the SessionsClient not closing. So I used the attached code block to solve that issue. After testing that block, I was finally able to free up all the used threads, and the error mentioned earlier was resolved.
SessionsClient sessionsClient = null;
QueryResult queryResult = null;
try {
SessionsSettings.Builder settingsBuilder = SessionsSettings.newBuilder();
SessionsSettings sessionsSettings = settingsBuilder
.setCredentialsProvider(FixedCredentialsProvider.create(credentials)).build();
sessionsClient = SessionsClient.create(sessionsSettings);
SessionName session = SessionName.of(projectId, senderId);
com.google.cloud.dialogflow.v2.TextInput.Builder textInput = TextInput.newBuilder().setText(message)
.setLanguageCode(languageCode);
QueryInput queryInput = QueryInput.newBuilder().setText(textInput).build();
DetectIntentResponse response = sessionsClient.detectIntent(session, queryInput);
queryResult = response.getQueryResult();
} catch (Exception e) {
e.printStackTrace();
}
finally {
    if (sessionsClient != null) { // avoid an NPE if create() failed
        sessionsClient.close();
    }
}
The shorter values on the graph highlight the use of client.close(). Without it, the threads were stuck in the parking state.
I had a similar issue recently using the Google Cloud Tasks API.
Channel ManagedChannelImpl{logId=5, target=cloudtasks.googleapis.com:443} was
not shutdown properly!!! ~*~*~* (ManagedChannelOrphanWrapper.java:159)
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
In this case the CloudTasksClient object implements AutoCloseable, and we should call its .close() method after it's done.
We can use a try-with-resources block like this, which auto-closes the client when done:
try( CloudTasksClient client = CloudTasksClient.create()){
CloudTaskQueue taskQueue = new CloudTaskQueue(client);
}
or add a try/finally:
CloudTasksClient client =null;
try{
client = CloudTasksClient.create() ;
CloudTaskQueue taskQueue = new CloudTaskQueue(client);
} catch (IOException e) {
e.printStackTrace();
} finally {
    if (client != null) { // avoid an NPE if create() failed
        client.close();
    }
}
In my case, I just shut down the channel in a try/finally block:
ManagedChannel channel = ManagedChannelBuilder.forAddress...
try{
...
}finally {
channel.shutdown();
}
Hyperledger Sawtooth supports subscribing to events from the Transaction Processor. However, is there a way to create application-specific events in the Transaction Processor, something like in the Python example here: https://www.jacklllll.xyz/blog/2019/04/08/sawtooth/
ctx.addEvent(
'agreement/create',
[['name', 'agreement'],
['address', address],
['buyer name', agreement.BuyerName],
['seller name', agreement.SellerName],
['house id', agreement.HouseID],
['creator', signer]],
null)
In the current Sawtooth Java SDK (v0.1.2) the only method to override is
apply(TpProcessRequest, State)
without the context. However, the documentation here (https://github.com/hyperledger/sawtooth-sdk-java/blob/master/sawtooth-sdk-transaction-processor/src/main/java/sawtooth/sdk/processor/TransactionHandler.java) mentions
addEvent(TpProcessRequest, Context)
So far I have managed to listen to sawtooth/state-delta events, but this gives me all state changes of that transaction family:
import sawtooth.sdk.protobuf.EventSubscription;
import sawtooth.sdk.protobuf.EventFilter;
import sawtooth.sdk.protobuf.ClientEventsSubscribeRequest;
import sawtooth.sdk.protobuf.ClientEventsSubscribeResponse;
import sawtooth.sdk.protobuf.ClientEventsUnsubscribeRequest;
import sawtooth.sdk.protobuf.Message;
EventFilter filter = EventFilter.newBuilder()
.setKey("address")
.setMatchString(nameSpace.concat(".*"))
.setFilterType(EventFilter.FilterType.REGEX_ANY)
.build();
EventSubscription subscription = EventSubscription.newBuilder()
.setEventType("sawtooth/state-delta")
.addFilters(filter)
.build();
context = new ZContext();
socket = context.createSocket(ZMQ.DEALER);
socket.connect("tcp://sawtooth-rest:4004");
ClientEventsSubscribeRequest request = ClientEventsSubscribeRequest.newBuilder()
.addSubscriptions(subscription)
.build();
message = Message.newBuilder()
.setCorrelationId("123")
.setMessageType(Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST)
.setContent(request.toByteString())
.build();
socket.send(message.toByteArray());
Once the Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST is registered, I get messages in a thread loop.
I was hoping that in the TransactionHandler I would be able to call addEvent() or create some type of event(s) that can then be subscribed to using the Java SDK.
Has anyone else tried creating custom events in Java on Sawtooth?
Here's an example of an event being added in Python. Java would be similar.
You add your custom-named event in your Transaction Processor:
context.add_event(event_type="cookiejar/bake", attributes=[("cookies-baked", amount)])
See https://github.com/danintel/sawtooth-cookiejar/blob/master/pyprocessor/cookiejar_tp.py#L138
Here are examples of event handlers written in Python and Go:
https://github.com/danintel/sawtooth-cookiejar/tree/master/events
Java would also be similar. Basically the logic in the event handler is (see the sketch after this list):
Subscribe to the events you want to listen to
Send the request to the validator
Read and parse the subscription response
In a loop, listen for subscribed events
After exiting the loop (if ever), unsubscribe from events
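A compact Java outline of those steps, reusing the messaging classes shown elsewhere on this page (the validator endpoint, the event type, and the loop flag are placeholders of mine; exception handling is collapsed into a throws clause):
import sawtooth.sdk.messaging.Future;
import sawtooth.sdk.messaging.Stream;
import sawtooth.sdk.protobuf.ClientEventsSubscribeRequest;
import sawtooth.sdk.protobuf.ClientEventsSubscribeResponse;
import sawtooth.sdk.protobuf.ClientEventsUnsubscribeRequest;
import sawtooth.sdk.protobuf.EventList;
import sawtooth.sdk.protobuf.EventSubscription;
import sawtooth.sdk.protobuf.Message;

void listenForEvents() throws Exception {
    Stream stream = new Stream("tcp://localhost:4004"); // validator endpoint (placeholder)

    // 1-2. Build the subscription and send the request to the validator
    EventSubscription subscription = EventSubscription.newBuilder()
            .setEventType("cookiejar/bake") // the custom event type from above
            .build();
    ClientEventsSubscribeRequest request = ClientEventsSubscribeRequest.newBuilder()
            .addSubscriptions(subscription)
            .build();
    Future future = stream.send(Message.MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST,
            request.toByteString());

    // 3. Read and parse the subscription response
    ClientEventsSubscribeResponse response =
            ClientEventsSubscribeResponse.parseFrom(future.getResult());

    // 4. Listen for subscribed events in a loop
    // (flip 'listening' from elsewhere, e.g. a shutdown hook, to exit)
    boolean listening = response.getStatus() == ClientEventsSubscribeResponse.Status.OK;
    while (listening) {
        Message msg = stream.receive();
        EventList events = EventList.parseFrom(msg.getContent());
        events.getEventsList().forEach(System.out::println);
    }

    // 5. Unsubscribe when done
    stream.send(Message.MessageType.CLIENT_EVENTS_UNSUBSCRIBE_REQUEST,
            ClientEventsUnsubscribeRequest.newBuilder().build().toByteString());
}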
For those who are trying to use the Java SDK for event publishing/subscribing: there is no direct API available. At least I couldn't find one, and I am using the 1.0 docker images.
So to publish your events, you need to publish directly to the sawtooth rest-api server. You need to take care of the following:
You need a context id, which is valid only per request. You get this from the request in your apply() method (code below). So make sure you publish the event during transaction processing, i.e. during the execution of the apply() method.
The event structure is as described in the docs.
If the transaction is successful and the block is committed, you get the event in the event subscriber; otherwise it doesn't show up.
While creating a subscriber, you need to subscribe to the sawtooth/block-commit event and add an additional subscription for your type of event, e.g. "myNS/my-event".
Sample Event Publishing code:
public void apply(TpProcessRequest request, State state) throws InvalidTransactionException, InternalError {
    // process your transaction first
    sawtooth.sdk.messaging.Stream eventStream = new Stream("tcp://localhost:4004"); // create this in the constructor of the class, NOT here
    List<Attribute> attrList = new ArrayList<>();
    Attribute attrs = Attribute.newBuilder().setKey("someKey").setValue("someValue").build();
    attrList.add(attrs);
    Event appEvent = Event.newBuilder().setEventType("myNS/my-event-type")
            .setData(<some ByteString here>).addAllAttributes(attrList).build();
    TpEventAddRequest addEventRequest = TpEventAddRequest.newBuilder()
            .setContextId(request.getContextId()).setEvent(appEvent).build();
    Future sawtoothSubsFuture = eventStream.send(MessageType.TP_EVENT_ADD_REQUEST, addEventRequest.toByteString());
    try {
        System.out.println(sawtoothSubsFuture.getResult());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Then you subscribe to the events like this (inspired by the marketplace samples):
try {
EventFilter eventFilter = EventFilter.newBuilder().setKey("address")
.setMatchString(String.format("^%s.*", "myNamespace"))
.setFilterType(FilterType.REGEX_ANY).build();
//subscribe to sawtooth/block-commit
EventSubscription deltaSubscription = EventSubscription.newBuilder().setEventType("sawtooth/block-commit")
.addFilters(eventFilter)
.build();
EventSubscription mySubscription = EventSubscription.newBuilder().setEventType("myNS/my-event-type")
.build(); //no filters added for my events.
ClientEventsSubscribeRequest subsReq = ClientEventsSubscribeRequest.newBuilder()
.addLastKnownBlockIds("0000000000000000").addSubscriptions(deltaSubscription).addSubscriptions(mySubscription)
.build();
Future sawtoothSubsFuture = eventStream.send(MessageType.CLIENT_EVENTS_SUBSCRIBE_REQUEST,
subsReq.toByteString());
ClientEventsSubscribeResponse eventSubsResp = ClientEventsSubscribeResponse
.parseFrom(sawtoothSubsFuture.getResult());
System.out.println("eventSubsResp.getStatus() :: " + eventSubsResp.getStatus());
if (eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.UNKNOWN_BLOCK)) {
System.out.println("Unknown block ");
// retry connection if this happens by calling this same method
}
if(!eventSubsResp.getStatus().equals(ClientEventsSubscribeResponse.Status.OK)) {
System.out.println("Subscription failed with status " + eventSubsResp.getStatus());
throw new RuntimeException("cannot connect ");
} else {
isActive = true;
System.out.println("Making active ");
}
while(isActive) {
Message eventMsg = eventStream.receive();
EventList eventList = EventList.parseFrom(eventMsg.getContent());
for (Event event : eventList.getEventsList()) {
System.out.println("An event ::::");
System.out.println(event);
}
}
} catch (Exception e) {
e.printStackTrace();
}
I'm a newbie to Apache Camel. On HP NonStop there is a Receiver that receives events generated by an event manager, something like a stream. My goal is to set up a consumer endpoint which receives the incoming messages and processes them through Camel.
At another endpoint I simply need to write them to a log. From my study I understood that for the consumer endpoint I need to create my own component, and the configuration would look like:
from("myComp:receive").to("log:net.javaforge.blog.camel?level=INFO")
Here is my code snippet which receives messages from the event system.
Receive receive = com.tandem.ext.guardian.Receive.getInstance();
byte[] maxMsg = new byte[500]; // holds largest possible request
short errorReturn = 0;
int countRead;                 // number of bytes read
boolean moreOpeners = true;
do { // read messages from $receive until last close
    try {
        countRead = receive.read(maxMsg, maxMsg.length);
        String receivedMessage = new String(maxMsg, "UTF-8");
        // Here I need to hand receivedMessage over to Camel
    } catch (ReceiveNoOpeners ex) {
        moreOpeners = false;
    } catch (Exception e) {
        moreOpeners = false;
    }
} while (moreOpeners);
Can someone give me some hints on how to make this a consumer?
The 10,000-foot view is this:
You need to start out with implementing a component. The easiest way to get started is to extend org.apache.camel.impl.DefaultComponent. The only thing you have to do is override DefaultComponent::createEndpoint(..). Quite obviously what it does is create your endpoint.
So the next thing you need is to implement your endpoint. Extend org.apache.camel.impl.DefaultEndpoint for this. Override at the minimum DefaultEndpoint::createConsumer(Processor) to create your own consumer.
Last but not least you need to implement the consumer. Again, it's best to extend org.apache.camel.impl.DefaultConsumer. The consumer is where the code that generates your messages has to go. Through the constructor you receive a reference to your endpoint. Use the endpoint reference to create a new Exchange, populate it, and send it on its way along the route. Something along the lines of:
Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
setMyMessageHeaders(ex.getIn(), myMessagemetaData);
setMyMessageBody(ex.getIn(), myMessage);
getAsyncProcessor().process(ex, new AsyncCallback() {
    @Override
    public void done(boolean doneSync) {
        LOG.debug("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
    }
});
I recommend you pick a simple component (DirectComponent ?) as an example to follow.
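To make that concrete, here is a minimal sketch of the component and endpoint halves (the class names and the consumer-only Producer are my assumptions, using the Camel 2.x classes named above; the consumer it references is the one shown in the next answer):
import java.util.Map;

import org.apache.camel.Component;
import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;

// Resolves URIs like "myComp:receive" to an endpoint.
public class MessageComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
        return new MessageEndpoint(uri, this);
    }
}

// Factory for consumers (and, in a full component, producers).
class MessageEndpoint extends DefaultEndpoint {
    public MessageEndpoint(String uri, Component component) {
        super(uri, component);
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
        return new MessageConsumer(this, processor); // the consumer from the next answer
    }

    @Override
    public Producer createProducer() throws Exception {
        throw new UnsupportedOperationException("consumer-only endpoint");
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}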
Herewith I'm adding my own consumer component; it may help someone.
public class MessageConsumer extends DefaultConsumer {

    private final MessageEndpoint endpoint;
    private boolean moreOpeners = true;

    public MessageConsumer(MessageEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.endpoint = endpoint;
    }

    @Override
    protected void doStart() throws Exception {
        int countRead = 0; // number of bytes read
        do {
            countRead++;
            String msg = String.valueOf(countRead) + " " + System.currentTimeMillis();
            Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
            ex.getIn().setBody(msg);
            getAsyncProcessor().process(ex, new AsyncCallback() {
                @Override
                public void done(boolean doneSync) {
                    log.info("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
                }
            });
            // This is an echo server, so echo the request back to the requester
        } while (moreOpeners);
    }

    @Override
    protected void doStop() throws Exception {
        moreOpeners = false;
        log.debug("Message processor is shutdown");
    }
}
Using JXTA 2.6 from http://jxse.kenai.com/ I want to create an application that can run multiple peers on one or more hosts. The peers should be able to find each other in a group and send direct messages as well as propagate messages.
What would a simple hello world type of application look like that meets these requirements?
I created this question with the intention of supplying a tutorial-like answer, an answer I tried very hard to find two months ago when starting to look at JXTA for a uni project. Feel free to add your own answers or improve on mine. I will wait a few days and accept the best one.
Introduction to JXTA 2.6 Peer discovery and pipe messaging
The guide I wish I had 2 months ago =)
After spending a lot of time during a university course building a JXTA p2p application, I feel a lot of the frustrations and confusion I went through could have been avoided with a good starting point.
The jar files you will need can be found here:
https://oss.sonatype.org/content/repositories/comkenaijxse-057/com/kenai/jxse/jxse/2.6/jxse-2.6.jar
http://sourceforge.net/projects/practicaljxta/files/lib-dependencies-2.6.zip/download
Throw them into Yourproject/lib, open up Eclipse, create a new project "Yourproject", and it should sort out importing the libraries for you.
You will soon come to realize that almost any information on the web is outdated, very outdated. You will also run into a lot of very confusing error messages, most of which can be avoided by going through this checklist.
Is your firewall turned off or at least open for the ports you use?
You can disable iptables using "sudo service iptables stop" under Fedora.
Check spelling! Many times, when joining groups or trying to send messages, spelling the group name wrong, or not using the exact same advertisement when looking for peers and services or when opening pipes, will cause very confusing messages. I was trying to figure out why my pipe connections timed out when I spotted the group names being "Net info" and "Net_info".
Are you using a JXTA home directory? One for each peer you run on the same computer?
Do you really use a unique peer id? The seed provided to IDFactory needs to be long enough or else you will get duplicates.
Turn off SELinux. I have had SELinux turned off during the development but can imagine it causing errors.
While it is common to group all fields together, I introduce them as I go to show where they are needed.
Note: This will not work in 2.7. Some issue with PSE membership I think.
public class Hello implements DiscoveryListener, PipeMsgListener {
// When developing you should handle these exceptions; I don't, to lessen the clutter of start()
public static void main(String[] args) throws PeerGroupException, IOException {
// JXTA logs a lot, you can configure it setting level here
Logger.getLogger("net.jxta").setLevel(Level.ALL);
// Randomize a port to use with a number over 1000 (for non root on unix)
// JXTA uses TCP for incoming connections which will conflict if more than
// one Hello runs at the same time on one computer.
int port = 9000 + new Random().nextInt(100);
Hello hello = new Hello(port);
hello.start();
hello.fetch_advertisements();
}
private String peer_name;
private PeerID peer_id;
private File conf;
private NetworkManager manager;
public Hello(int port) {
// Add a random number to make it easier to identify by name, will also make sure the ID is unique
peer_name = "Peer " + new Random().nextInt(1000000);
// This is what you will be looking for in Wireshark instead of an IP, hint: filter by "jxta"
peer_id = IDFactory.newPeerID(PeerGroupID.defaultNetPeerGroupID, peer_name.getBytes());
// Here the local peer cache will be saved, if you have multiple peers this must be unique
conf = new File("." + System.getProperty("file.separator") + peer_name);
// Most documentation you will find use a deprecated network manager setup, use this one instead
// ADHOC is usually a good starting point, other alternatives include Edge and Rendezvous
try {
manager = new NetworkManager(
NetworkManager.ConfigMode.ADHOC,
peer_name, conf.toURI());
}
catch (IOException e) {
// Will be thrown if you specify an invalid directory in conf
e.printStackTrace();
}
NetworkConfigurator configurator;
try {
// Settings Configuration
configurator = manager.getConfigurator();
configurator.setTcpPort(port);
configurator.setTcpEnabled(true);
configurator.setTcpIncoming(true);
configurator.setTcpOutgoing(true);
configurator.setUseMulticast(true);
configurator.setPeerID(peer_id);
}
catch (IOException e) {
// Never caught this one but let me know if you do =)
e.printStackTrace();
}
}
private static final String subgroup_name = "Make sure this is spelled the same everywhere";
private static final String subgroup_desc = "...";
private static final PeerGroupID subgroup_id = IDFactory.newPeerGroupID(PeerGroupID.defaultNetPeerGroupID, subgroup_name.getBytes());
private static final String unicast_name = "This must be spelled the same too";
private static final String multicast_name = "Or else you will get the wrong PipeID";
private static final String service_name = "And dont forget it like i did a million times";
private PeerGroup subgroup;
private PipeService pipe_service;
private PipeID unicast_id;
private PipeID multicast_id;
private PipeID service_id;
private DiscoveryService discovery;
private ModuleSpecAdvertisement mdadv;
public void start() throws PeerGroupException, IOException {
// Launch the missiles, if you have logging on and see no exceptions
// after this is ran, then you probably have at least the jars setup correctly.
PeerGroup net_group = manager.startNetwork();
// Connect to our subgroup (all groups are subgroups of Netgroup)
// If the group does not exist, it will be automatically created
// Note this is marked deprecated; I'm not sure what the better way is
ModuleImplAdvertisement mAdv = null;
try {
mAdv = net_group.getAllPurposePeerGroupImplAdvertisement();
} catch (Exception ex) {
System.err.println(ex.toString());
}
subgroup = net_group.newGroup(subgroup_id, mAdv, subgroup_name, subgroup_desc);
// A simple check to see if connecting to the group worked
if (Module.START_OK != subgroup.startApp(new String[0]))
System.err.println("Cannot start child peergroup");
// We will spice things up to a more interesting level by sending unicast and multicast messages
// In order to be able to do that we will create two listeners that will listen for
// unicast and multicast advertisements respectively. All messages will be handled by Hello in the
// pipeMsgEvent method.
unicast_id = IDFactory.newPipeID(subgroup.getPeerGroupID(), unicast_name.getBytes());
multicast_id = IDFactory.newPipeID(subgroup.getPeerGroupID(), multicast_name.getBytes());
pipe_service = subgroup.getPipeService();
pipe_service.createInputPipe(get_advertisement(unicast_id, false), this);
pipe_service.createInputPipe(get_advertisement(multicast_id, true), this);
// In order for other peers to find this one (and say hello) we will
// advertise a Hello Service.
discovery = subgroup.getDiscoveryService();
discovery.addDiscoveryListener(this);
ModuleClassAdvertisement mcadv = (ModuleClassAdvertisement)
AdvertisementFactory.newAdvertisement(ModuleClassAdvertisement.getAdvertisementType());
mcadv.setName("STACK-OVERFLOW:HELLO");
mcadv.setDescription("Tutorial example to use JXTA module advertisement Framework");
ModuleClassID mcID = IDFactory.newModuleClassID();
mcadv.setModuleClassID(mcID);
// Let the group know of this service "module" / collection
discovery.publish(mcadv);
discovery.remotePublish(mcadv);
mdadv = (ModuleSpecAdvertisement)
AdvertisementFactory.newAdvertisement(ModuleSpecAdvertisement.getAdvertisementType());
mdadv.setName("STACK-OVERFLOW:HELLO");
mdadv.setVersion("Version 1.0");
mdadv.setCreator("sun.com");
mdadv.setModuleSpecID(IDFactory.newModuleSpecID(mcID));
mdadv.setSpecURI("http://www.jxta.org/Ex1");
service_id = IDFactory.newPipeID(subgroup.getPeerGroupID(), service_name.getBytes());
PipeAdvertisement pipeadv = get_advertisement(service_id, false);
mdadv.setPipeAdvertisement(pipeadv);
// Let the group know of the service
discovery.publish(mdadv);
discovery.remotePublish(mdadv);
// Start listening for discovery events, received by the discoveryEvent method
pipe_service.createInputPipe(pipeadv, this);
}
private static PipeAdvertisement get_advertisement(PipeID id, boolean is_multicast) {
PipeAdvertisement adv = (PipeAdvertisement )AdvertisementFactory.
newAdvertisement(PipeAdvertisement.getAdvertisementType());
adv.setPipeID(id);
if (is_multicast)
adv.setType(PipeService.PropagateType);
else
adv.setType(PipeService.UnicastType);
adv.setName("This however");
adv.setDescription("does not really matter");
return adv;
}
@Override public void discoveryEvent(DiscoveryEvent event) {
// Found another peer! Let's say hello shall we!
// Reformatting to create a real peer id string
String found_peer_id = "urn:jxta:" + event.getSource().toString().substring(7);
send_to_peer("Hello", found_peer_id);
}
private void send_to_peer(String message, String found_peer_id) {
// This is where having the same ID is important, or else we won't be
// able to open a pipe and send messages
PipeAdvertisement adv = get_advertisement(unicast_id, false);
// Send message to all peers in "ps", just one in our case
Set<PeerID> ps = new HashSet<PeerID>();
try {
ps.add((PeerID)IDFactory.fromURI(new URI(found_peer_id)));
}
catch (URISyntaxException e) {
// The JXTA peer ids need to be formatted as proper urns
e.printStackTrace();
}
// A pipe we can use to send messages with
OutputPipe sender = null;
try {
sender = pipe_service.createOutputPipe(adv, ps, 10000);
}
catch (IOException e) {
// Thrown if there was an error opening the connection, check firewall settings
e.printStackTrace();
}
Message msg = new Message();
MessageElement fromElem = null;
MessageElement msgElem = null;
try {
fromElem = new ByteArrayMessageElement("From", null, peer_id.toString().getBytes("ISO-8859-1"), null);
msgElem = new ByteArrayMessageElement("Msg", null, message.getBytes("ISO-8859-1"), null);
} catch (UnsupportedEncodingException e) {
// Yepp, you want to spell ISO-8859-1 correctly
e.printStackTrace();
}
msg.addMessageElement(fromElem);
msg.addMessageElement(msgElem);
try {
sender.send(msg);
} catch (IOException e) {
// Check, firewall, settings.
e.printStackTrace();
}
}
@Override public void pipeMsgEvent(PipeMsgEvent event) {
// Someone is sending us a message!
try {
Message msg = event.getMessage();
byte[] msgBytes = msg.getMessageElement("Msg").getBytes(true);
byte[] fromBytes = msg.getMessageElement("From").getBytes(true);
String from = new String(fromBytes);
String message = new String(msgBytes);
System.out.println(message + " says " + from);
}
catch (Exception e) {
// You will notice that JXTA is not very specific with exceptions...
e.printStackTrace();
}
}
/**
* We will not find anyone if we are not regularly looking
*/
private void fetch_advertisements() {
new Thread("fetch advertisements thread") {
public void run() {
while(true) {
discovery.getRemoteAdvertisements(null, DiscoveryService.ADV, "Name", "STACK-OVERFLOW:HELLO", 1, null);
try {
sleep(10000);
}
catch(InterruptedException e) {}
}
}
}.start();
}
}