I have a PCollection of KV where the key is a GCS file pattern and the value is some additional info about the files (e.g., the "Source" system that generated the files). E.g.,
KV("gs://bucket1/dir1/*", "SourceX"),
KV("gs://bucket1/dir2/*", "SourceY")
I need a PTransform to expand the file patterns to all matching files in the GCS folders, keeping the "Source" field. E.g., if there are two files X1.dat, X2.dat under dir1 and one file (Y1.dat) under dir2, the output will be:
KV("gs://bucket1/dir1/X1.dat", "SourceX"),
KV("gs://bucket1/dir1/X2.dat", "SourceX")
KV("gs://bucket1/dir2/Y1.dat", "SourceY")
Could I use FileIO.matchAll() to achieve this? I am stuck on how to combine/join the "Source" field to the matching files. This is something I was trying, not quite there yet:
public PCollection<KV<String, String>> expand(PCollection<KV<String, String>> filesAndSources) {
  return filesAndSources
      .apply("Get file names", Keys.create())
      .apply(FileIO.matchAll())
      .apply(FileIO.readMatches())
      .apply(ParDo.of(
          new DoFn<ReadableFile, KV<String, String>>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
              ReadableFile file = c.element();
              String fileName = file.getMetadata().resourceId().toString();
              c.output(KV.of(fileName, XXXXX)); // How to get the value field ("Source") from the input KV?
My difficulty is the last line: for XXXXX, how do I get the value field ("Source") from the input KV? Is there any way to "join" or "combine" the input KV's value back onto the expanded keys, given that one key (file pattern) is expanded into multiple values?
Thank you!
MatchResult.Metadata contains the resourceId you are already using, but not the GCS path (with wildcards) that it matched.
You can achieve what you want using side inputs. To demonstrate this I created the following filesAndSources (as per your comment this could be an input parameter so it can't be hard-coded downstream):
PCollection<KV<String, String>> filesAndSources = p.apply("Create file pattern and source pairs",
Create.of(KV.of("gs://" + Bucket + "/sales/*", "Sales"),
KV.of("gs://" + Bucket + "/events/*", "Events")));
I materialize this into a side input (in this case as a Map). The key will be the glob pattern converted into a regex (thanks to this answer) and the value will be the source string:
final PCollectionView<Map<String, String>> regexAndSources =
    filesAndSources.apply("Glob pattern to RegEx", ParDo.of(new DoFn<KV<String, String>, KV<String, String>>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        String glob = c.element().getKey();
        StringBuilder out = new StringBuilder("^");
        for (int i = 0; i < glob.length(); ++i) {
          final char ch = glob.charAt(i);
          switch (ch) {
            case '*': out.append(".*"); break;
            case '?': out.append('.'); break;
            case '.': out.append("\\."); break;
            case '\\': out.append("\\\\"); break;
            default: out.append(ch);
          }
        }
        out.append('$');
        c.output(KV.of(out.toString(), c.element().getValue()));
      }})).apply("Save as Map", View.asMap());
Then, after reading the filenames we can use the side input to parse each path to see which is the matching pattern/source pair:
filesAndSources
    .apply("Get file names", Keys.create())
    .apply(FileIO.matchAll())
    .apply(FileIO.readMatches())
    .apply(ParDo.of(new DoFn<ReadableFile, KV<String, String>>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        ReadableFile file = c.element();
        String fileName = file.getMetadata().resourceId().toString();
        Set<Map.Entry<String, String>> patternSet = c.sideInput(regexAndSources).entrySet();
        for (Map.Entry<String, String> pattern : patternSet) {
          if (fileName.matches(pattern.getKey())) {
            String source = pattern.getValue();
            c.output(KV.of(fileName, source));
          }
        }
      }}).withSideInputs(regexAndSources))
Note that the glob-to-regex conversion is done before materializing the side input, instead of inside this last DoFn, to avoid duplicate work.
The output, as expected in my case:
Feb 24, 2019 10:44:05 PM org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn process
INFO: Matched 2 files for pattern gs://REDACTED/events/*
Feb 24, 2019 10:44:05 PM org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn process
INFO: Matched 2 files for pattern gs://REDACTED/sales/*
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/sales/sales1.csv, value=Sales
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/sales/sales2.csv, value=Sales
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/events/events1.csv, value=Events
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/events/events2.csv, value=Events
Full code.
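As an aside (not from the original answer): another way to keep the pairing, assuming it's acceptable to expand each pattern inside a single DoFn rather than through FileIO.matchAll(), is to call Beam's FileSystems.match() directly so the "Source" value never leaves the element. A minimal sketch (needs org.apache.beam.sdk.io.FileSystems and org.apache.beam.sdk.io.fs.MatchResult):

filesAndSources.apply("Expand patterns", ParDo.of(
    new DoFn<KV<String, String>, KV<String, String>>() {
      @ProcessElement
      public void processElement(ProcessContext c) throws IOException {
        String pattern = c.element().getKey();
        String source = c.element().getValue();
        // Expand the glob against the underlying filesystem (GCS here)
        for (MatchResult.Metadata m : FileSystems.match(pattern).metadata()) {
          c.output(KV.of(m.resourceId().toString(), source));
        }
      }
    }));

The trade-off is that matching happens per element inside one DoFn instead of benefiting from matchAll's parallel expansion.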
I am new to Hazelcast Jet and I don't understand how to use a simple java.util.Map as a BatchSource.
I have tried the approaches below, but neither seems to work.
Map<String, Object> data =new HashMap<String, Object>();
data.put("xyz", "abc");
Way 1:
Map<String, Object> am = jetInstance.getMap("abc");
am.putAll(data);
BatchSource batchSource = Sources.map("abc");
It gives the error: java.util.HashMap cannot be cast to java.util.Map$Entry
Way 2:
BatchSource batchSource = TestSources.items(data);
This gives the same error. Please help me understand what I am doing wrong; I am trying to create a pipeline but cannot get any further.
I think the problem must be somewhere other than in the code you've shared; the BatchSource definition isn't incorrect. I just did a quick test and the following runs fine (tested on version 5.0):
public static void main(String[] args) {
Config config = new Config();
config.getJetConfig().setEnabled(true);
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
Map<String,Object> map = hz.getMap("map");
map.put("A", "AnObject");
map.put("B", "AnotherObject");
BatchSource source = Sources.map("map");
Pipeline p = Pipeline.create();
p.readFrom(source)
.writeTo(Sinks.logger());
hz.getJet().newJob(p).join();
}
With output:
INFO: [192.168.86.25]:5701 [dev] [5.0] Start execution of job '0702-23dd-8800-0001', execution 0702-23dd-8801-0001 from coordinator [192.168.86.25]:5701
Oct 25, 2021 9:11:44 AM com.hazelcast.jet.impl.connector.WriteLoggerP
INFO: [192.168.86.25]:5701 [dev] [5.0] [0702-23dd-8800-0001/loggerSink#0] B=AnotherObject
Oct 25, 2021 9:11:44 AM com.hazelcast.jet.impl.connector.WriteLoggerP
INFO: [192.168.86.25]:5701 [dev] [5.0] [0702-23dd-8800-0001/loggerSink#0] A=AnObject
Oct 25, 2021 9:11:44 AM com.hazelcast.jet.impl.MasterJobContext
INFO: [192.168.86.25]:5701 [dev] [5.0] Execution of job '0702-23dd-8800-0001', execution 0702-23dd-8801-0001 completed successfully
(Since a map is unordered, the entries are seen in a different order than the one in which they were added to the map.)
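As a side note on the Way 2 error (my reading, not something verified in this answer): TestSources.items(data) treats the whole HashMap as one single item, which later fails to cast to Map.Entry. Passing the entry set instead may be what was intended; a minimal sketch, assuming Jet's TestSources.items(Iterable) overload:

import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;
import java.util.HashMap;
import java.util.Map;

public class TestSourceExample {
    public static void main(String[] args) {
        Map<String, Object> data = new HashMap<>();
        data.put("xyz", "abc");

        Pipeline p = Pipeline.create();
        // items(Iterable) emits each map entry as its own item; passing the
        // Map itself would make the entire HashMap a single item.
        p.readFrom(TestSources.items(data.entrySet()))
         .writeTo(Sinks.logger());
        // Submit on an embedded member as in the answer above:
        // hz.getJet().newJob(p).join();
    }
}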
I tried to compress the Inputmask.js file with the Closure Compiler. Here is my code:
import java.io.FileWriter;
import java.util.logging.Level;

import com.google.javascript.jscomp.CheckLevel;
import com.google.javascript.jscomp.CompilationLevel;
import com.google.javascript.jscomp.CompilerOptions;
import com.google.javascript.jscomp.JSError;
import com.google.javascript.jscomp.SourceFile;
import com.google.javascript.jscomp.WarningLevel;

public class JSFileMinifyTest {

    public static void main(String[] args) throws Exception {
        String sourceFileName = "D:\\temp\\jquery.inputmask.bundle.js";
        String outputFilename = "D:\\temp\\combined.min.js";

        com.google.javascript.jscomp.Compiler.setLoggingLevel(Level.INFO);
        com.google.javascript.jscomp.Compiler compiler = new com.google.javascript.jscomp.Compiler();

        CompilerOptions options = new CompilerOptions();
        CompilationLevel.WHITESPACE_ONLY.setOptionsForCompilationLevel(options);
        options.setAggressiveVarCheck(CheckLevel.OFF);
        options.setRuntimeTypeCheck(false);
        options.setCheckRequires(CheckLevel.OFF);
        options.setCheckProvides(CheckLevel.OFF);
        options.setReserveRawExports(false);
        WarningLevel.VERBOSE.setOptionsForWarningLevel(options);

        // To get the complete set of externs, the logic in
        // CompilerRunner.getDefaultExterns() should be used here.
        SourceFile extern = SourceFile.fromCode("externs.js", "function alert(x) {}");
        SourceFile jsFile = SourceFile.fromFile(sourceFileName);
        compiler.compile(extern, jsFile, options);

        for (JSError message : compiler.getWarnings()) {
            System.err.println("Warning message: " + message.toString());
        }
        for (JSError message : compiler.getErrors()) {
            System.err.println("Error message: " + message.toString());
        }

        FileWriter outputFile = new FileWriter(outputFilename);
        outputFile.write(compiler.toSource());
        outputFile.close();
    }
}
But errors occurred while compressing this JS file:
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.parsing.ParserRunner parse
INFO: Error parsing D:\temp\jquery.inputmask.bundle.js: Compilation produced 3 syntax errors. (D:\temp\jquery.inputmask.bundle.js#1)
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1002: ERROR - Parse error. identifier is a reserved word
static || null !== test.fn && void 0 !== testPos.input ? static && null !== test.fn && void 0 !== testPos.input && (static = !1,
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1003: ERROR - Parse error. syntax error
maskTemplate += "</span>") : (static = !0, maskTemplate += "<span class='im-static''>");
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1010: ERROR - Parse error. missing variable name
var maskTemplate = "", static = !1;
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager printSummary
WARNING: 3 error(s), 0 warning(s)
Error message: JSC_PARSE_ERROR. Parse error. identifier is a reserved word at D:\temp\jquery.inputmask.bundle.js line 1002 : 16
Error message: JSC_PARSE_ERROR. Parse error. syntax error at D:\temp\jquery.inputmask.bundle.js line 1003 : 29
Error message: JSC_PARSE_ERROR. Parse error. missing variable name at D:\temp\jquery.inputmask.bundle.js line 1010 : 39
Is there any way to skip syntax validation, or to ignore the errors and continue compressing?
By default, the compiler parses code as 'use strict'. 'static' is a reserved word in strict mode. You can either change the language mode or simply rename the variable from 'static' to something else.
The strict mode reserve words are documented here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode#Paving_the_way_for_future_ECMAScript_versions
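For the first option, a minimal sketch (assuming a Closure Compiler version whose CompilerOptions exposes setLanguageIn, consistent with the options API used above):

// Parse the source as non-strict ES5, where 'static' is still a legal
// identifier; strict-mode parsing reserves it as a future reserved word.
options.setLanguageIn(CompilerOptions.LanguageMode.ECMASCRIPT5);

Renaming the variable is usually the safer fix, since the rest of your toolchain may expect strict-mode-clean output.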
Can we store the value returned from the OMDb function searchOneMovie(moviename) so that it can be displayed as output in a JSP page when a button is clicked? I am developing a web application where, on clicking the button, I should get the details of the movie searched by the user. But by using searchOneMovie() I am getting the output in the console, not in the web page. Please tell me how to display the searchOneMovie() value in a JSP page.
CODE:
public void search(HttpServletResponse response, String moviename) throws OmdbSyntaxErrorException, OmdbConnectionErrorException, OmdbMovieNotFoundException, IOException {
Omdb o = new Omdb();
try {
PrintWriter out = response.getWriter();
out.print(o.searchOneMovie(moviename));
} catch (Exception ignore) {
}
}
OUTPUT: This output appears on the console, but I want it in the web page. Please help.
Mar 13, 2017 8:13:51 PM com.omdbapi.RestClient execute
INFO: executing GET http://www.omdbapi.com/?t=star+wars HTTP/1.1
Mar 13, 2017 8:13:52 PM com.omdbapi.Omdb resultToJson
INFO: received {"Title":"Star Wars: Episode IV - A New Hope","Year":"1977","Rated":"PG","Released":"25 May 1977","Runtime":"121 min","Genre":"Action, Adventure, Fantasy","Director":"George Lucas","Writer":"George Lucas","Actors":"Mark Hamill, Harrison Ford, Carrie Fisher, Peter Cushing","Plot":"Luke Skywalker joins forces with a Jedi Knight, a cocky pilot, a wookiee and two droids to save the galaxy from the Empire's world-destroying battle-station, while also attempting to rescue Princess Leia from the evil Darth Vader.","Language":"English","Country":"USA","Awards":"Won 6 Oscars. Another 50 wins & 28 nominations.","Poster":"https://images-na.ssl-images-amazon.com/images/M/MV5BYzQ2OTk4N2QtOGQwNy00MmI3LWEwNmEtOTk0OTY3NDk2MGJkL2ltYWdlL2ltYWdlXkEyXkFqcGdeQXVyNjc1NTYyMjg#._V1_SX300.jpg","Metascore":"92","imdbRating":"8.7","imdbVotes":"963,318","imdbID":"tt0076759","Type":"movie","Response":"True"}
I am trying to create a UDF for importing into Pig that matches a Regex pattern on a date. The Regex has been tested and works accordingly, but I am having trouble with the following code:
package com.date.format;
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
public class DATERANGE extends EvalFunc<String> {

    @Override
    public String exec(Tuple arg0) throws IOException {
        try {
            String pattern = "(Oct\\W(?:1[5-9]|2[0-3])\\W(?:(?:0?9|10):\\d{2}:\\d{2}|11:00:00))";
            Pattern pat = Pattern.compile(pattern);
            Matcher match = pat.matcher((String) arg0.get(0));
            if (match.find()) {
                return match.group(0);
            } else {
                return "none";
            }
        } catch (Exception e) {
            throw new IOException("Caught exception processing input row ", e);
        }
    }
}
After compiling the above Java code, exporting it as a JAR, and running it inside Hadoop with the following Pig script:
register 'DATEFormat.jar';
ld = LOAD 'dates/date_data_three' AS (date:chararray);
loop = foreach ld generate com.date.format.DATERANGE(date) as d:chararray;
dump loop;
I get the following error:
ERROR 2078: Caught error from UDF: com.date.format.DATERANGE [Caught exception
processing input row ]
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator
for alias loop
at org.apache.pig.PigServer.openIterator(PigServer.java:912)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.loadScript(GruntParser.java:566)
at org.apache.pig.tools.grunt.GruntParser.processScript(GruntParser.java:513)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.Script(PigScriptParser.java:1014)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:550)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias loop
at org.apache.pig.PigServer.storeEx(PigServer.java:1015)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
... 16 more
The data file contains dates as shown below:
Wed Oct 15 09:26:09 BST 2014
Wed Oct 15 19:26:09 BST 2014
Wed Oct 18 08:26:09 BST 2014
Wed Oct 23 10:26:09 BST 2014
Sun Oct 05 09:26:09 BST 2014
Wed Nov 20 19:26:09 BST 2014
Does anybody know the correct way to implement a Java UDF for Pig that would work with the Regex I have provided?
Thanks
I recommend using the built-in REGEX_EXTRACT command; this is much easier than writing a UDF.
ld = LOAD 'input.txt' AS (date:chararray);
loop = foreach ld generate REGEX_EXTRACT(date,'(Oct\\W(?:1[5-9]|2[0-3])\\W(?:(?:0?9|10):\\d{2}:\\d{2}|11:00:00))',1) as d:chararray;
C = FILTER loop by d is not null;
D = FOREACH C GENERATE $0;
DUMP D;
Output:
(Oct 15 09:26:09)
(Oct 23 10:26:09)
Your regex UDF is also working fine for me. I just copied your input and Java code and executed it locally, and it works perfectly. Please see the output below that I got from your UDF code. I guess you may need to check whether your classpath is set properly.
(Oct 15 09:26:09)
(none)
(none)
(Oct 23 10:26:09)
(none)
(none)
Even better, you could use ToDate:
Load your data into filtered_raw_financings_csvs with close_date as a chararray:
financings_csvs = FOREACH filtered_raw_financings_csvs
GENERATE name,
city,
state,
(close_date==''?NULL:ToDate(close_date, 'dd-MMM-yy')) AS close_date
;
Build your date format string as described here:
http://docs.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html
This snippet is shown in context here:
http://nathan.vertile.com/blog/2015/04/17/handling-dates-in-hadoop-pig/
Does anyone know how to do the TURN portion of Ice4j? I've managed to code it so that it works when the phone is on WiFi, but not when it's on the mobile network.
I'm sending the agent info via TCP and then building the connection manually instead of using a signalling process. The TCP connection already works fine, so I don't think it's a TCP issue. Maybe I'm building the agent wrong?
I know that you're supposed to use a TURN server if STUN doesn't work, and I provided a large list of public TURN servers, but I might be missing something. Maybe the packets aren't being sent out properly?
Error (mostly "Failed to send ALLOCATE-REQUEST(0x3)"):
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createMediaStream
INFO: Create media stream for data
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createComponent
INFO: Create component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent gatherCandidates
INFO: Gather candidates for component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.HostCandidateHarvester harvest
INFO: End candidate harvest within 160 ms, for org.ice4j.ice.harvest.HostCandidateHarvester, component: 1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.StunCandidateHarvest sendRequest
INFO: Failed to send ALLOCATE-REQUEST(0x3)[attrib.count=3 len=32 tranID=0x9909DC6648016A67FDD4B2D8] through /192.168.0.8:5001/udp to stun2.l.google.com:19302:5001/udp
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:c8ce:5a17:c339:cc40%4:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:380d:2a4c:b350:eea8%8:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient updateCheckListAndTimerStates
INFO: CheckList will failed in a few seconds if nosucceeded checks come
Sep 11, 2014 3:36:17 PM org.ice4j.ice.ConnectivityCheckClient$1 run
INFO: CheckList for stream data FAILED
Sep 11, 2014 3:36:17 PM org.ice4j.ice.Agent checkListStatesUpdated
INFO: ICE state is FAILED
Script (both the server and the client have code similar to this):
Agent agent = new Agent();
agent.setControlling(false);
StunCandidateHarvester stunHarv = new StunCandidateHarvester(new TransportAddress("sip-communicator.net", port, Transport.UDP));
StunCandidateHarvester stun6Harv = new StunCandidateHarvester(new TransportAddress("ipv6.sip-communicator.net", port, Transport.UDP));
agent.addCandidateHarvester(stunHarv);
agent.addCandidateHarvester(stun6Harv);
String[] hostnames = new String[] { "130.79.90.150",
"2001:660:4701:1001:230:5ff:fe1a:805f",
"jitsi.org",
"numb.viagenie.ca",
"stun01.sipphone.com",
"stun.ekiga.net",
"stun.fwdnet.net",
"stun.ideasip.com",
"stun.iptel.org",
"stun.rixtelecom.se",
"stun.schlund.de",
"stun.l.google.com:19302",
"stun1.l.google.com:19302",
"stun2.l.google.com:19302",
"stun3.l.google.com:19302",
"stun4.l.google.com:19302",
"stunserver.org",
"stun.softjoys.com",
"stun.voiparound.com",
"stun.voipbuster.com",
"stun.voipstunt.com",
"stun.voxgratia.org",
"stun.xten.com",};
LongTermCredential longTermCredential = new LongTermCredential("guest", "anon");
for (String hostname : hostnames)
agent.addCandidateHarvester(new TurnCandidateHarvester(new TransportAddress(hostname, port, Transport.UDP), longTermCredential));
//Build a stream for agent
IceMediaStream stream = agent.createMediaStream("data");
try {
Component c = agent.createComponent(stream, Transport.UDP, port, port, port+100 );
String response = "";
List<LocalCandidate> localCandidates = c.getLocalCandidates();
for (Candidate<?> can : localCandidates) {
    response += "||" + can.toString();
}
response = "Video||" + agent.getLocalUfrag() + "||" + agent.getLocalPassword() + "||" + c.getDefaultCandidate().toString() + response;
System.out.println("Server >>> " + response);
DataOutputStream outStream = new DataOutputStream(client.getOutputStream());
outStream.write(response.getBytes("UTF-8"));
outStream.flush();
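// 'info' below holds the fields parsed from the remote peer's "||"-separated
// message (same format as the 'response' built above); that parsing code is not shown.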
List<IceMediaStream> streams = agent.getStreams();
for(IceMediaStream localStream : streams) {
List<Component> localComponents = localStream.getComponents();
for(Component localComponent : localComponents) {
for(int i = 3; i < info.length; i++) {
String[] detail = info[i].split(" "); //0: Foundation
//1: Component ID
//2: Transport
//3: Priority #
//4: Address (Needed with Port # to create Transport Address)
//5: Port # (Needed with Address to create Transport Address)
//6: -filler: "Type" is next field-
//7: Candidate Type
String[] foundation = detail[0].split(":"); //Turn "Candidate:#" -> "Candidate" and "#". We use "#"
localComponent.addRemoteCandidate(new RemoteCandidate(new TransportAddress(detail[4], Integer.valueOf(detail[5]), Transport.UDP), c, CandidateType.HOST_CANDIDATE, foundation[1], Long.valueOf(detail[3]), null));
}
String[] defaultDetail = info[3].split(" ");
String[] defaultFoundation = defaultDetail[0].split(":");
localComponent.setDefaultRemoteCandidate(new RemoteCandidate(new TransportAddress(defaultDetail[4], Integer.valueOf(defaultDetail[5]), Transport.UDP), c, CandidateType.HOST_CANDIDATE, defaultFoundation[1], Long.valueOf(defaultDetail[3]), null));
}
localStream.setRemoteUfrag(info[1]);
localStream.setRemotePassword(info[2]);
}
agent.startConnectivityEstablishment();
System.out.println("ICEServer <><><> Completed");
I realize now that your list of TURN servers actually seems to consist mostly of STUN servers (I'm not sure about the first two). They should be added as STUN servers, if anything:
agent.addCandidateHarvester(
    new StunCandidateHarvester(
        new TransportAddress(
            InetAddress.getByName("stun.l.google.com"),
            19302,
            Transport.UDP)));