In Hazelcast Jet, how to read a Map object as a BatchSource? - java

I am new to Hazelcast Jet and I don't understand how to read a simple java.util.Map as a BatchSource.
I have tried the approaches below, but neither seems to work.
Map<String, Object> data = new HashMap<>();
data.put("xyz", "abc");
Way 1:
Map<String, Object> am = jetInstance.getMap("abc");
am.putAll(data);
BatchSource batchSource = Sources.map("abc");
It gives the error: java.util.HashMap cannot be cast to java.util.Map$Entry
Way 2: BatchSource batchSource = TestSources.items(data);
This gives the same error. What am I doing wrong? I am trying to create a pipeline but cannot get past this step.

I think the problem must be somewhere other than in the code you've shared; the BatchSource definition isn't incorrect. I just did a quick test and the following runs fine (tested on version 5.0):
public static void main(String[] args) {
    Config config = new Config();
    config.getJetConfig().setEnabled(true);
    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

    Map<String, Object> map = hz.getMap("map");
    map.put("A", "AnObject");
    map.put("B", "AnotherObject");

    BatchSource source = Sources.map("map");

    Pipeline p = Pipeline.create();
    p.readFrom(source)
     .writeTo(Sinks.logger());

    hz.getJet().newJob(p).join();
}
With output:
INFO: [192.168.86.25]:5701 [dev] [5.0] Start execution of job '0702-23dd-8800-0001', execution 0702-23dd-8801-0001 from coordinator [192.168.86.25]:5701
Oct 25, 2021 9:11:44 AM com.hazelcast.jet.impl.connector.WriteLoggerP
INFO: [192.168.86.25]:5701 [dev] [5.0] [0702-23dd-8800-0001/loggerSink#0] B=AnotherObject
Oct 25, 2021 9:11:44 AM com.hazelcast.jet.impl.connector.WriteLoggerP
INFO: [192.168.86.25]:5701 [dev] [5.0] [0702-23dd-8800-0001/loggerSink#0] A=AnObject
Oct 25, 2021 9:11:44 AM com.hazelcast.jet.impl.MasterJobContext
INFO: [192.168.86.25]:5701 [dev] [5.0] Execution of job '0702-23dd-8800-0001', execution 0702-23dd-8801-0001 completed successfully
(Since a map is unordered, in this case the entries are printed in a different order than the one in which they were added.)
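As a side note on "Way 2" from the question (not part of the original answer): passing the whole Map to TestSources.items() makes the map itself a single item, so a downstream stage expecting Map.Entry fails with exactly that ClassCastException. A minimal sketch of the variant I would expect to work, passing the entry set instead (my assumption, not something the answerer tested):

Map<String, Object> data = new HashMap<>();
data.put("xyz", "abc");

// Each Map.Entry becomes one item in the batch, matching what Sources.map() emits.
BatchSource<Map.Entry<String, Object>> batchSource = TestSources.items(data.entrySet());

Pipeline p = Pipeline.create();
p.readFrom(batchSource)
 .writeTo(Sinks.logger());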

Related

Java AWS DynamoDB how to increment number

I'm trying to add 1 to a number, and then get the new number back.
I can't get my UpdateItemSpec right. Please help. Every example out there seems to show something different and none of it is working.
Here is my code:
AmazonDynamoDBClient dbClient = new AmazonDynamoDBClient(
        new BasicAWSCredentials("SECRET", "SECRET"));
dbClient.setRegion(Region.getRegion(Regions.fromName("us-west-1")));
DynamoDB dynamoDB = new DynamoDB(dbClient);
Table table = dynamoDB.getTable("NumTable");

GetItemSpec spec = new GetItemSpec()
        .withPrimaryKey("PKey", "OrderNumber");
Item item = table.getItem(spec);
logger.info(item.toJSONPretty());

UpdateItemSpec updateItemSpec = new UpdateItemSpec()
        .withPrimaryKey("Pkey", "OrderNumber")
        .withReturnValues("UPDATED_NEW")
        .withUpdateExpression("ADD #k :incr")
        .withNameMap(new NameMap().with("#k", "NumVal"))
        .withValueMap(
                new ValueMap()
                        .withNumber(":incr", 1));
                        //.withString(":incr", "{N:\"1\"}"));
                        //I've tried a million other ways too!
UpdateItemOutcome outcome = table.updateItem(updateItemSpec);
logger.info(outcome.getItem().toJSONPretty());
Console shows the first get part working:
Sat Nov 09 00:46:07 UTC - 2019-11-09 00:46:07 f1475303-7585-4804-8a42-2e0a9b16b1dc INFO Commission:88 - {
Sat Nov 09 00:46:07 UTC - "NumVal" : 200000,
Sat Nov 09 00:46:07 UTC - "PKey" : "OrderNumber"
Sat Nov 09 00:46:07 UTC - }
But the update part gives this error (among others):
Sat Nov 09 00:46:08 UTC - The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 8TPDT2EVMC0G0GF3IFK7SU6777VV4KQNSO5AEMVJF66Q9ASUAAJG): com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 8TPDT2EVMC0G0GF3IFK7SU6777VV4KQNSO5AEMVJF66Q9ASUAAJG) at [........]
I really feel like the key element does match the schema :'(
Here is a picture from my AWS console:
Your implementation looks fine to me. The error is caused by a typo in your UpdateItemSpec code.
UpdateItemSpec updateItemSpec = new UpdateItemSpec()
        .withPrimaryKey("Pkey", "OrderNumber")
The typo is "Pkey". It should be "PKey", which is why it works in GetItemSpec code.

How to add additional field to beam FileIO.matchAll() result?

I have a PCollection of KV where the key is a GCS file pattern and the value is some additional info about the files (e.g., the "Source" system that generated them). E.g.,
KV("gs://bucket1/dir1/*", "SourceX"),
KV("gs://bucket1/dir2/*", "SourceY")
I need a PTransform to expand the file patterns to all matching files in the GCS folders while keeping the "Source" field. E.g., if there are two files X1.dat and X2.dat under dir1 and one file (Y1.dat) under dir2, the output should be:
KV("gs://bucket1/dir1/X1.dat", "SourceX"),
KV("gs://bucket1/dir1/X2.dat", "SourceX")
KV("gs://bucket1/dir2/Y1.dat", "SourceY")
Could I use FileIO.matchAll() to achieve this? I am stuck on how to combine/join the "Source" field with the matching files. This is what I was trying; it's not quite there yet:
public PCollection<KV<String, String>> expand(PCollection<KV<String, String>> filesAndSources) {
    return filesAndSources
        .apply("Get file names", Keys.create())
        .apply(FileIO.matchAll())
        .apply(FileIO.readMatches())
        .apply(ParDo.of(
            new DoFn<ReadableFile, KV<String, String>>() {
                @ProcessElement
                public void processElement(ProcessContext c) {
                    ReadableFile file = c.element();
                    String fileName = file.getMetadata().resourceId().toString();
                    c.output(KV.of(fileName, XXXXX)); // How to get the value field ("Source") from the input KV?
                }
            }));
}
My difficulty is the last line: for XXXXX, how do I get the value field ("Source") from the input KV? Is there any way to "join" or "combine" the input KV's value back onto the expanded keys, since one key (file pattern) is expanded into multiple files?
Thank you!
MatchResult.Metadata contains the resourceId you are already using, but not the GCS path (with wildcards) it matched.
You can achieve what you want using side inputs. To demonstrate this I created the following filesAndSources (as per your comment this could be an input parameter so it can't be hard-coded downstream):
PCollection<KV<String, String>> filesAndSources = p.apply("Create file pattern and source pairs",
    Create.of(KV.of("gs://" + Bucket + "/sales/*", "Sales"),
              KV.of("gs://" + Bucket + "/events/*", "Events")));
I materialize this into a side input (in this case as Map). The key will be the glob pattern converted into a regex one (thanks to this answer) and the value will be the source string:
final PCollectionView<Map<String, String>> regexAndSources =
    filesAndSources.apply("Glob pattern to RegEx", ParDo.of(new DoFn<KV<String, String>, KV<String, String>>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            String regex = c.element().getKey();
            StringBuilder out = new StringBuilder("^");
            for (int i = 0; i < regex.length(); ++i) {
                final char ch = regex.charAt(i);
                switch (ch) {
                    case '*': out.append(".*"); break;
                    case '?': out.append('.'); break;
                    case '.': out.append("\\."); break;
                    case '\\': out.append("\\\\"); break;
                    default: out.append(ch);
                }
            }
            out.append('$');
            c.output(KV.of(out.toString(), c.element().getValue()));
        }
    })).apply("Save as Map", View.asMap());
Then, after reading the filenames we can use the side input to parse each path to see which is the matching pattern/source pair:
filesAndSources
    .apply("Get file names", Keys.create())
    .apply(FileIO.matchAll())
    .apply(FileIO.readMatches())
    .apply(ParDo.of(new DoFn<ReadableFile, KV<String, String>>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            ReadableFile file = c.element();
            String fileName = file.getMetadata().resourceId().toString();

            Set<Map.Entry<String, String>> patternSet = c.sideInput(regexAndSources).entrySet();
            for (Map.Entry<String, String> pattern : patternSet) {
                if (fileName.matches(pattern.getKey())) {
                    String source = pattern.getValue();
                    c.output(KV.of(fileName, source));
                }
            }
        }
    }).withSideInputs(regexAndSources))
Note that the glob-to-regex conversion is done before materializing the side input, rather than here, to avoid duplicate work.
The output, as expected in my case:
Feb 24, 2019 10:44:05 PM org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn process
INFO: Matched 2 files for pattern gs://REDACTED/events/*
Feb 24, 2019 10:44:05 PM org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn process
INFO: Matched 2 files for pattern gs://REDACTED/sales/*
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/sales/sales1.csv, value=Sales
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/sales/sales2.csv, value=Sales
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/events/events1.csv, value=Events
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/events/events2.csv, value=Events
Full code.

Error in WS-Security SOAP client request with WSS4J and CXF

I made the changes you suggested, pedrofb, and the exception changed. I was wrongly adding the output interceptor to the input chain (cxfEndpoint.getInInterceptors().add(new PropertiesWSS4JOutInterceptor(outProps, cryto)) when it should really be cxfEndpoint.getOutInterceptors().add(new PropertiesWSS4JOutInterceptor(outProps, cryto))). I added cxfEndpoint.getOutInterceptors().add(new LoggingOutInterceptor()) as you suggested, but it never logs anything to the console. The errors persist; now I get a different exception:
oct 17, 2017 5:06:14 PM org.apache.cxf.service.factory.ReflectionServiceFactoryBean buildServiceFromClass
INFORMACIÓN: Creating Service {DGI_Modernizacion_Consolidado}WSPersonaGetActEmpresarialSoapPortService from class dgi_modernizacion_consolidado.WSPersonaGetActEmpresarialSoapPort
oct 17, 2017 5:06:17 PM org.apache.cxf.phase.PhaseInterceptorChain add
ADVERTENCIA: Skipping interceptor org.apache.cxf.binding.soap.saaj.SAAJInInterceptor$SAAJPreInInterceptor: Phase read specified does not exist.
oct 17, 2017 5:06:18 PM org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor checkActions
ADVERTENCIA: Security processing failed (actions mismatch)
oct 17, 2017 5:06:18 PM org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor handleMessage
ADVERTENCIA:
org.apache.ws.security.WSSecurityException: An error was discovered processing the <wsse:Security> header
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.checkActions(WSS4JInInterceptor.java:361)
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:317)
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:96)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)
at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:530)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:463)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:366)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:319)
at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:96)
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:133)
at com.sun.proxy.$Proxy34.execute(Unknown Source)
at t2voice.dgi.client_dgi.App.main(App.java:201)
I have made many changes, which is why I confused you; sorry about that. This is the code corresponding to the new error. What could I be doing wrong?
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(WSPersonaGetActEmpresarialSoapPort.class);
factory.setAddress("https://server.info.digitalsound.gub:6000/ePrueba/ws_personaGetActEmpresarialPrueba?wsdl");
WSPersonaGetActEmpresarialSoapPort port = (WSPersonaGetActEmpresarialSoapPort) factory.create();
org.apache.cxf.endpoint.Client client = ClientProxy.getClient(port);
Endpoint cxfEndpoint = client.getEndpoint();
Map<String, Object> outProps = new HashMap<String, Object>();
outProps.put(WSHandlerConstants.ACTION, WSHandlerConstants.SIGNATURE);
outProps.put(WSHandlerConstants.USER, "alias");
outProps.put(WSHandlerConstants.SIG_PROP_FILE, "ws.properties");
outProps.put(WSHandlerConstants.SIG_KEY_ID, "DirectReference");
outProps.put(WSHandlerConstants.PASSWORD_TYPE, WSConstants.PW_TEXT);
outProps.put(WSHandlerConstants.PW_CALLBACK_CLASS,ClientPasswordCallback.class.getName());
Map<String, Object> inProps = new HashMap<String, Object>();
inProps.put(WSHandlerConstants.ACTION, WSHandlerConstants.SIGNATURE);
inProps.put(WSHandlerConstants.USER, "alias");
inProps.put(WSHandlerConstants.SIG_PROP_FILE, "ws.properties");
inProps.put(WSHandlerConstants.SIG_KEY_ID, "DirectReference");
cxfEndpoint.getInInterceptors().add(new PropertiesWSS4JInInterceptor(inProps, cryto));
cxfEndpoint.getOutInterceptors().add(new PropertiesWSS4JOutInterceptor(outProps, cryto));
WSPersonaGetActEmpresarialExecute parameters = new WSPersonaGetActEmpresarialExecute();
parameters.setRut("43557783445");
WSPersonaGetActEmpresarialExecuteResponse resp = port.execute(parameters);
And this is the corresponding ws.properties:
org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.ws.security.crypto.merlin.keystore.alias=alias
org.apache.ws.security.crypto.merlin.keystore.type=PKCS12
org.apache.ws.security.crypto.merlin.keystore.password=keystore_pass
org.apache.ws.security.crypto.merlin.file=keystore.pfx
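As an editorial aside (not part of the original post): one way the message logging mentioned above could be wired up for debugging, assuming the same cxfEndpoint object from the snippet and CXF's classic org.apache.cxf.interceptor logging interceptors. It is only a sketch to make the signed request and the server's response visible while diagnosing the "actions mismatch" warning:

import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingOutInterceptor;

// Dump the outgoing (signed) request and the incoming response to the console,
// so the <wsse:Security> header of the response can be compared with inProps.
cxfEndpoint.getOutInterceptors().add(new LoggingOutInterceptor());
cxfEndpoint.getInInterceptors().add(new LoggingInInterceptor());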

Hazelcast unique ScheduledExecutorService on replicated map gets lost on node shutdown

I'm trying to run a Hazelcast ReplicatedMap spread over multiple nodes for caching. The map will have about 60,000 entries and loading/creating the entries is really expensive.
The entries of the map occasionally become invalid and need to be updated or removed, or new entries have to be inserted into the map. For this I am considering a scheduled service that updates the map on a regular basis.
To prevent concurrency issues and costly duplicate creation of new entries, there should be only one reload service.
To test this I made a little test case:
public class HazelTest {

    private ReplicatedMap<Long, Bean> hzMap;
    private HazelcastInstance instance;

    public HazelTest() {
        instance = Hazelcast.newHazelcastInstance();
        hzMap = instance.getReplicatedMap("UniqueName");
        IScheduledExecutorService scheduler = instance.getScheduledExecutorService("ExecutorService");
        try {
            scheduler.scheduleAtFixedRate(new BeanReloader(), 5, 10, TimeUnit.SECONDS);
        } catch (DuplicateTaskException ex) {
            System.out.println("reloader already running");
        }
    }

    public static void main(String[] args) throws Exception {
        Random random = new Random(System.currentTimeMillis());
        HazelTest test = new HazelTest();
        System.out.println("Start ...");
        long i = 0;
        try {
            while (true) {
                i++;
                Bean bean = test.hzMap.get((long) random.nextInt(1000));
                if (bean != null) {
                    if (i % 100000 == 0) {
                        System.out.println("Bean: " + bean.toString());
                    }
                    bean.setName("NewName");
                }
            }
        } finally {
            test.close();
            System.out.println("End.");
        }
    }

    public void close() {
        instance.getPartitionService().forceLocalMemberToBeSafe(5, TimeUnit.SECONDS);
        if (instance.getPartitionService().isLocalMemberSafe()) {
            instance.shutdown();
        } else {
            System.out.println("Error!!!!!");
        }
    }
}
The reloader:
public class BeanReloader implements NamedTask, Runnable, HazelcastInstanceAware, Serializable {

    private transient HazelcastInstance hazelcastInstance;

    @Override
    public void run() {
        System.out.println("Bean Reload ....");
        for (long i = 0; i < 200; i++) {
            Bean bean = new Bean(i, "Bean " + i);
            ReplicatedMap<Object, Object> map = hazelcastInstance.getReplicatedMap("UniqueName");
            map.put(i, bean);
        }
        System.out.println("Reload end.");
    }

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public String getName() {
        return "BeanReloader";
    }
}
The bean has only two fields, for test purposes:
public class Bean implements Serializable {
    private long id;
    private String name;
    // getter and setter
}
Now when I run this in different terminals on my machine (or over the network; it makes no difference), the nodes show the desired behaviour: the service only runs on one node at a time, and when I start another node I get a DuplicateTaskException.
But when the node currently executing the task goes down, the service is not always moved to another node. There is about a 2/3 chance that the service is completely lost and not running on any of the remaining nodes.
Now my questions: is this behaviour normal? Do I have to check for a running service myself, and if so via what API?
Or am I getting something wrong and there is another way to achieve my goals? Is a Hazelcast replicated map the right approach in the first place?
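For reference, a minimal sketch of the kind of check meant above, assuming the getAllScheduledFutures() API on IScheduledExecutorService and the task name from BeanReloader.getName(); whether the scheduled future actually survives the failed member is exactly what is in question here:

// Hedged sketch: check whether a task named "BeanReloader" is still scheduled
// anywhere in the cluster, and reschedule it if not (the race is handled by
// DuplicateTaskException, as in the HazelTest constructor above).
IScheduledExecutorService scheduler = instance.getScheduledExecutorService("ExecutorService");

boolean reloaderPresent = scheduler.<Object>getAllScheduledFutures().values().stream()
        .flatMap(List::stream)
        .anyMatch(f -> "BeanReloader".equals(f.getHandler().getTaskName()));

if (!reloaderPresent) {
    try {
        scheduler.scheduleAtFixedRate(new BeanReloader(), 5, 10, TimeUnit.SECONDS);
    } catch (DuplicateTaskException ex) {
        // another node rescheduled it first; nothing to do
    }
}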
Edit: edited the code as the simplifications I made for posting did not show the error any more.
There are no errors in the log when a node leaves and the service is not restored. The only logging is on one node:
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.TcpIpConnection
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Connection[id=8, /X.X.X.X:5702->/X.X.X.X:54226, endpoint=[X.X.X.X]:5703, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Verbindungsaufbau abgelehnt to address /X.X.X.X:5703]
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Verbindungsaufbau abgelehnt to address /X.X.X.X:5703]
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Verbindungsaufbau abgelehnt to address /X.X.X.X:5703]
Bean: Bean{id=893, name='NewName'}
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Verbindungsaufbau abgelehnt to address /X.X.X.X:5703]
Mär 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.TcpIpConnectionMonitor
WARNUNG: [X.X.X.X]:5702 [dev] [3.8] Removing connection to endpoint [X.X.X.X]:5703 Cause => java.net.SocketException {Verbindungsaufbau abgelehnt to address /X.X.X.X:5703}, Error-Count: 5
Mär 06, 2017 9:54:08 AM com.hazelcast.internal.cluster.ClusterService
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Removing Member [X.X.X.X]:5703 - 875ccc3a-dc10-4c21-815a-4b57ae41a6ff
Mär 06, 2017 9:54:08 AM com.hazelcast.internal.cluster.ClusterService
INFORMATION: [X.X.X.X]:5702 [dev] [3.8]
Members [3] {
Member [X.X.X.X]:5702 - 0ec92bb9-6330-4a3d-90ac-0ed374fd266c this
Member [X.X.X.X]:5701 - 0d1a832e-e1e8-4cad-8546-5a97c8c052c3
Member [X.X.X.X]:5704 - bd18c7f9-892e-430a-b1ab-da740cc7a6c5
}
Mär 06, 2017 9:54:08 AM com.hazelcast.transaction.TransactionManagerService
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Committing/rolling-back alive transactions of Member [X.X.X.X]:5703 - 875ccc3a-dc10-4c21-815a-4b57ae41a6ff, UUID: 875ccc3a-dc10-4c21-815a-4b57ae41a6ff
Mär 06, 2017 9:54:08 AM com.hazelcast.internal.partition.impl.MigrationManager
INFORMATION: [X.X.X.X]:5702 [dev] [3.8] Re-partitioning cluster data... Migration queue size: 204
and on the others basically the same except for the last line:
INFORMATION: [X.X.X.X]:5701 [dev] [3.8] Committing/rolling-back alive transactions of Member [X.X.X.X]:5703 - 875ccc3a-dc10-4c21-815a-4b57ae41a6ff, UUID: 875ccc3a-dc10-4c21-815a-4b57ae41a6ff
I did a test run with debug logs on 4 nodes (hz1-4.log). Node 4 picked up the schedule. After one run I killed that node (log timestamp 15:28:50,955 in hz4.log). Here are the four logs from that run (note: due to the vast amount of logging I only pasted the logs from timestamp 15:28:50,955 onward):
hz3.log
hz2.log
hz1.log
hz4.log
With that setup I can reliably reproduce the failure. I just start the 4 nodes with
mvn exec:java -Dexec.mainClass="de.tle.products.HazelTest" -Dnumber=X
where X is the number of the node (just a variable in the log4j config file). After all four nodes are running, I kill the one currently running the schedule, and then the schedule is not running on any of the remaining nodes.

Ice4J: Ice State Failed on 4G network

Does anyone know how to do the TURN portion of Ice4j? I've managed to code it so that it works when the phone is on WiFi, but not when it's on the mobile network.
I'm sending agent info via TCP and then building the connection manually instead of using a signalling process. The TCP connection already works fine, so I don't think it's a TCP issue. Maybe I'm building the agent wrong?
I know that you're supposed to use a TURN server if STUN doesn't work, and I provided a large list of public TURN servers, but I might be missing something. Maybe the packets aren't being sent out properly?
Error (mostly "Failed to send ALLOCATE-REQUEST(0x3)"):
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createMediaStream
INFO: Create media stream for data
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createComponent
INFO: Create component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent gatherCandidates
INFO: Gather candidates for component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.HostCandidateHarvester harvest
INFO: End candidate harvest within 160 ms, for org.ice4j.ice.harvest.HostCandidateHarvester, component: 1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.StunCandidateHarvest sendRequest
INFO: Failed to send ALLOCATE-REQUEST(0x3)[attrib.count=3 len=32 tranID=0x9909DC6648016A67FDD4B2D8] through /192.168.0.8:5001/udp to stun2.l.google.com:19302:5001/udp
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:c8ce:5a17:c339:cc40%4:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:380d:2a4c:b350:eea8%8:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient updateCheckListAndTimerStates
INFO: CheckList will failed in a few seconds if nosucceeded checks come
Sep 11, 2014 3:36:17 PM org.ice4j.ice.ConnectivityCheckClient$1 run
INFO: CheckList for stream data FAILED
Sep 11, 2014 3:36:17 PM org.ice4j.ice.Agent checkListStatesUpdated
INFO: ICE state is FAILED
Code (both the server and the client sides have code similar to this):
Agent agent = new Agent();
agent.setControlling(false);

StunCandidateHarvester stunHarv = new StunCandidateHarvester(
        new TransportAddress("sip-communicator.net", port, Transport.UDP));
StunCandidateHarvester stun6Harv = new StunCandidateHarvester(
        new TransportAddress("ipv6.sip-communicator.net", port, Transport.UDP));
agent.addCandidateHarvester(stunHarv);
agent.addCandidateHarvester(stun6Harv);

String[] hostnames = new String[] {
        "130.79.90.150",
        "2001:660:4701:1001:230:5ff:fe1a:805f",
        "jitsi.org",
        "numb.viagenie.ca",
        "stun01.sipphone.com",
        "stun.ekiga.net",
        "stun.fwdnet.net",
        "stun.ideasip.com",
        "stun.iptel.org",
        "stun.rixtelecom.se",
        "stun.schlund.de",
        "stun.l.google.com:19302",
        "stun1.l.google.com:19302",
        "stun2.l.google.com:19302",
        "stun3.l.google.com:19302",
        "stun4.l.google.com:19302",
        "stunserver.org",
        "stun.softjoys.com",
        "stun.voiparound.com",
        "stun.voipbuster.com",
        "stun.voipstunt.com",
        "stun.voxgratia.org",
        "stun.xten.com",
};

LongTermCredential longTermCredential = new LongTermCredential("guest", "anon");
for (String hostname : hostnames)
    agent.addCandidateHarvester(new TurnCandidateHarvester(
            new TransportAddress(hostname, port, Transport.UDP), longTermCredential));

// Build a stream for the agent
IceMediaStream stream = agent.createMediaStream("data");
try {
    Component c = agent.createComponent(stream, Transport.UDP, port, port, port + 100);

    String response = "";
    List<LocalCandidate> remoteCandidates = c.getLocalCandidates();
    for (Candidate<?> can : remoteCandidates) {
        response += "||" + can.toString();
    }
    response = "Video||" + agent.getLocalUfrag() + "||" + agent.getLocalPassword()
            + "||" + c.getDefaultCandidate().toString() + response;
    System.out.println("Server >>> " + response);

    DataOutputStream outStream = new DataOutputStream(client.getOutputStream());
    outStream.write(response.getBytes("UTF-8"));
    outStream.flush();

    List<IceMediaStream> streams = agent.getStreams();
    for (IceMediaStream localStream : streams) {
        List<Component> localComponents = localStream.getComponents();
        for (Component localComponent : localComponents) {
            for (int i = 3; i < info.length; i++) {
                // Fields of each candidate line:
                // 0: Foundation
                // 1: Component ID
                // 2: Transport
                // 3: Priority #
                // 4: Address (needed with port # to create the TransportAddress)
                // 5: Port # (needed with address to create the TransportAddress)
                // 6: -filler: "Type" is next field-
                // 7: Candidate Type
                String[] detail = info[i].split(" ");
                // Turn "Candidate:#" -> "Candidate" and "#". We use "#"
                String[] foundation = detail[0].split(":");
                localComponent.addRemoteCandidate(new RemoteCandidate(
                        new TransportAddress(detail[4], Integer.valueOf(detail[5]), Transport.UDP),
                        c, CandidateType.HOST_CANDIDATE, foundation[1], Long.valueOf(detail[3]), null));
            }
            String[] defaultDetail = info[3].split(" ");
            String[] defaultFoundation = defaultDetail[0].split(":");
            localComponent.setDefaultRemoteCandidate(new RemoteCandidate(
                    new TransportAddress(defaultDetail[4], Integer.valueOf(defaultDetail[5]), Transport.UDP),
                    c, CandidateType.HOST_CANDIDATE, defaultFoundation[1], Long.valueOf(defaultDetail[3]), null));
        }
        localStream.setRemoteUfrag(info[1]);
        localStream.setRemotePassword(info[2]);
    }

    agent.startConnectivityEstablishment();
    System.out.println("ICEServer <><><> Completed");
I realize now that your list of TURN servers seems to mostly consist of STUN servers (not sure about the first two). They should be added as STUN servers, if anything:
agent.addCandidateHarvester(
        new StunCandidateHarvester(
                new TransportAddress(
                        InetAddress.getByName("stun.l.google.com"),
                        19302,
                        Transport.UDP)));
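For example, the existing hostnames list from the question could be added as STUN harvesters with a loop like the following sketch (my own illustration, not from the original answer; it assumes 3478 as the default STUN port and does not handle the one bare IPv6 literal in the list):

for (String hostname : hostnames) {
    String host = hostname;
    int stunPort = 3478; // default STUN port when none is given
    int colon = hostname.lastIndexOf(':');
    if (colon > -1) { // entries like "stun.l.google.com:19302"
        host = hostname.substring(0, colon);
        stunPort = Integer.parseInt(hostname.substring(colon + 1));
    }
    agent.addCandidateHarvester(new StunCandidateHarvester(
            new TransportAddress(host, stunPort, Transport.UDP)));
}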
