I'm trying to get a specific user's tweets into Processing and then have them spoken aloud using the TTS library. So far I've managed to get the tweets into Processing and printed the way I want them. But adding the TTS part is where it's proving problematic, given my novice-level skills.
What happens at the moment is that I receive the error message:
The method speak(String) in the type TTS is not applicable for the arguments (String[])
Anyone have any ideas? Help would be greatly appreciated. Thanks.
import twitter4j.util.*;
import twitter4j.*;
import twitter4j.management.*;
import twitter4j.api.*;
import twitter4j.conf.*;
import twitter4j.json.*;
import twitter4j.auth.*;
import guru.ttslib.*;
import java.util.*;
TTS tts = new TTS();
ConfigurationBuilder cb = new ConfigurationBuilder();
cb.setOAuthConsumerKey("XXXX");
cb.setOAuthConsumerSecret("XXXX");
cb.setOAuthAccessToken("XXXX");
cb.setOAuthAccessTokenSecret("XXXX");
List<Status> statuses = null;
Twitter twitter = new TwitterFactory(cb.build()).getInstance();
String userName ="#BBC";
int numTweets = 19;
String[] twArray = new String[numTweets];
try {
statuses = twitter.getUserTimeline(userName);
}
catch (TwitterException e) {
e.printStackTrace(); // at minimum, log the failure instead of swallowing it
}
for (int i = 0; i < statuses.size() && i < numTweets; i++) { // don't overrun twArray
Status status = statuses.get(i);
//println(status.getUser().getName() + ": " + status.getText());
twArray[i] = status.getUser().getName() + ": " + status.getText();
}
println(twArray);
tts.speak(twArray);
The error says it all: the tts.speak() function takes a single String value, but you're giving it a String[] array.
In other words, you should only be passing in a single tweet at a time. Instead, you're passing in every tweet at once. The function doesn't know how to handle that, so you get the error.
You need to only pass in a single String value. How you do that depends on what exactly your goal is. You might just pass in the first tweet:
tts.speak(twArray[0]);
Or you might pass in each tweet one at a time:
for(String tweet : twArray){
tts.speak(tweet);
}
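Or, if you want the whole timeline read out in one go, you could first join the array into a single String. A minimal sketch (the ". " separator is an arbitrary choice, and the null check guards slots left empty when fewer than numTweets tweets come back):
StringBuilder allTweets = new StringBuilder();
for (String tweet : twArray) {
  if (tweet != null) { // skip unfilled slots
    allTweets.append(tweet).append(". ");
  }
}
tts.speak(allTweets.toString());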
We are using the Google Cloud AutoML Tables service for online prediction.
We have created, trained and deployed the model. The model is giving predictions using the Google console. We are trying to integrate this model in our java code.
We are not able to pass the “values” attribute as an array of strings in the payload object in our Java code, and we haven't found anything about this in the documentation.
Please find the link we are using for this:
https://cloud.google.com/automl-tables/docs/samples/automl-tables-predict
Please find the json object in the screenshot.
Please let us know how to pass the “values” attribute as an array of strings in the payload object.
Thanks.
Based on the reference you are following, to populate "values" you need to define it in main(). You can refer to the Value.Builder class if you need to set number, null, etc. values.
List<Value> values = new ArrayList<>();
values.add(Value.newBuilder().setStringValue("This is test data.").build());
// add more elements in values as needed
This values list is then used to build a Row, which accepts an iterable of protobuf Values; see Row.newBuilder().addAllValues().
Row row = Row.newBuilder().addAllValues(values).build();
Using these, the payload is complete and a prediction request can be built:
ExamplePayload payload = ExamplePayload.newBuilder().setRow(row).build();
PredictRequest request =
PredictRequest.newBuilder()
.setName(name.toString())
.setPayload(payload)
.putParams("feature_importance", "true")
.build();
PredictResponse response = client.predict(request);
Your full prediction code should look like this:
import com.google.cloud.automl.v1beta1.AnnotationPayload;
import com.google.cloud.automl.v1beta1.ExamplePayload;
import com.google.cloud.automl.v1beta1.ModelName;
import com.google.cloud.automl.v1beta1.PredictRequest;
import com.google.cloud.automl.v1beta1.PredictResponse;
import com.google.cloud.automl.v1beta1.PredictionServiceClient;
import com.google.cloud.automl.v1beta1.Row;
import com.google.cloud.automl.v1beta1.TablesAnnotation;
import com.google.protobuf.Value;
import com.google.protobuf.NullValue;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
class TablesPredict {
public static void main(String[] args) throws IOException {
// TODO(developer): Replace these variables before running the sample.
String projectId = "your-project-id";
String modelId = "TBL9999999999";
// Values should match the input expected by your model.
List<Value> values = new ArrayList<>();
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setStringValue("blue-colar").build());
values.add(Value.newBuilder().setStringValue("married").build());
values.add(Value.newBuilder().setStringValue("primary").build());
values.add(Value.newBuilder().setStringValue("no").build());
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setStringValue("yes").build());
values.add(Value.newBuilder().setStringValue("yes").build());
values.add(Value.newBuilder().setStringValue("cellular").build());
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setNullValue(NullValue.NULL_VALUE).build());
values.add(Value.newBuilder().setStringValue("unknown").build());
predict(projectId, modelId, values);
}
static void predict(String projectId, String modelId, List<Value> values) throws IOException {
// Initialize client that will be used to send requests. This client only needs to be created
// once, and can be reused for multiple requests. After completing all of your requests, call
// the "close" method on the client to safely clean up any remaining background resources.
try (PredictionServiceClient client = PredictionServiceClient.create()) {
// Get the full path of the model.
ModelName name = ModelName.of(projectId, "us-central1", modelId);
Row row = Row.newBuilder().addAllValues(values).build();
ExamplePayload payload = ExamplePayload.newBuilder().setRow(row).build();
// Feature importance gives you visibility into how the features in a specific prediction
// request informed the resulting prediction. For more info, see:
// https://cloud.google.com/automl-tables/docs/features#local
PredictRequest request =
PredictRequest.newBuilder()
.setName(name.toString())
.setPayload(payload)
.putParams("feature_importance", "true")
.build();
PredictResponse response = client.predict(request);
System.out.println("Prediction results:");
for (AnnotationPayload annotationPayload : response.getPayloadList()) {
TablesAnnotation tablesAnnotation = annotationPayload.getTables();
System.out.format(
"Classification label: %s%n", tablesAnnotation.getValue().getStringValue());
System.out.format("Classification score: %.3f%n", tablesAnnotation.getScore());
// Get features of top importance
tablesAnnotation
.getTablesModelColumnInfoList()
.forEach(
info ->
System.out.format(
"\tColumn: %s - Importance: %.2f%n",
info.getColumnDisplayName(), info.getFeatureImportance()));
}
}
}
}
For testing purposes I used Google's test dataset (gs://cloud-ml-tables-data/bank-marketing.csv) and ran a test prediction with the code above.
I'm trying to generate recommendations using Apache Mahout, using MongoDB to create the data model via MongoDBDataModel. My code is as follows:
import java.net.UnknownHostException;
import java.util.List;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.impl.model.mongodb.MongoDBDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.ThresholdUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.UserBasedRecommender;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;
import com.mongodb.MongoException;
public class usingMongo {
public static void main(String[] args) throws UnknownHostException, MongoException, TasteException {
final long startTime = System.nanoTime();
MongoDBDataModel model = new MongoDBDataModel("AdamsLaptop", 27017,
"test", "ratings100k", false, false, null);
System.out.println("connected to mongo ");
UserSimilarity UserSim = new PearsonCorrelationSimilarity(model);
UserNeighborhood neighborhood = new ThresholdUserNeighborhood(0.5, UserSim, model);
UserBasedRecommender UserRecommender = new GenericUserBasedRecommender(model, neighborhood, UserSim);
List<RecommendedItem>UserRecommendations = UserRecommender.recommend(1, 3);
for (RecommendedItem recommendation : UserRecommendations) {
System.out.println("You may like movie " + recommendation.getItemID() + " as a user similar to you also rated it " + recommendation.getValue() + " USER");
}
ItemSimilarity ItemSim = new PearsonCorrelationSimilarity(model);//LogLikelihoodSimilarity(model);
GenericItemBasedRecommender ItemRecommender = new GenericItemBasedRecommender(model, ItemSim);
List<RecommendedItem>ItemRecommendations = ItemRecommender.recommend(1, 3);
for (RecommendedItem recommendation : ItemRecommendations) {
System.out.println("You may like movie " + recommendation.getItemID() + " as a user similar to you also rated it " + recommendation.getValue() + " ITEM");
}
final long duration = System.nanoTime() - startTime;
System.out.println(duration);
}
}
I can't see where I've gone wrong, but through numerous changes and lots of trial and error the error message remains the same:
Exception in thread "main" java.lang.NullPointerException
at org.apache.mahout.cf.taste.impl.model.mongodb.MongoDBDataModel.getID(MongoDBDataModel.java:743)
at org.apache.mahout.cf.taste.impl.model.mongodb.MongoDBDataModel.buildModel(MongoDBDataModel.java:570)
at org.apache.mahout.cf.taste.impl.model.mongodb.MongoDBDataModel.<init>(MongoDBDataModel.java:245)
at recommender.usingMongo.main(usingMongo.java:24)
Any suggestions? Here's an example of my data within MongoDB:
{ "_id" : ObjectId("56ddf61f5960960c333f3dcb"),"userId" : 1, "movieId" : 292, "rating" : 4, "timestamp" : 847116936 }
I successfully integrated MongoDB data with Mahout.
The structure of the data in MongoDB depends on the kind of similarity algorithm you use. For example:
UserSimilarity
MongoDBDataModel datamodel = new MongoDBDataModel("127.0.0.1", 27017, "testing", "ratings", true, true, null);
where user_id and item_id are integer values, preference is a float value, and created_at is a timestamp
SVDRecommender
here user_id and item_id are MongoDB ObjectIds, preference is a float value, and created_at is a timestamp
The obvious troubleshooting step is to check whether the MongoDB server is running; judging by the exception, it is. I think the problem lies in the structure of your data.
Use user_id instead of userId, item_id instead of itemId, and preference instead of rating. I don't know if this will make any difference; I followed one of the tutorials online, but can't find it at the moment.
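For example, the document from the question would then look like this (a guess at the intended shape, mirroring the example above):
{ "_id" : ObjectId("56ddf61f5960960c333f3dcb"), "user_id" : 1, "item_id" : 292, "preference" : 4, "created_at" : 847116936 }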
It's working, but it's too slow when I have more than 10000 users with 1000 items.
I think the problem is that Mahout assumes default names for some fields that need to reside in your MongoDB: the user ID, item ID and preference are expected as user_id, item_id and preference. So the solution might lie in using another MongoDBDataModel constructor that lets you pass the names of those fields in your MongoDB instance as parameters, or in redesigning your collection schema. A sketch of the first option follows.
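This is only a sketch, assuming the longer MongoDBDataModel constructor that takes the field names; the parameter order is from memory, so verify it against the Javadoc of your Mahout version:
// Hypothetical: map your existing field names instead of renaming them.
// manage=false, finalRemove=false, format=null as in the question;
// user/password are null (no auth), mapping collection left as default.
MongoDBDataModel model = new MongoDBDataModel(
    "AdamsLaptop", 27017, "test", "ratings100k",
    false, false, null,
    null, null,
    "userId", "movieId", "rating",
    null);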
I hope that makes sense.
I need help with my Java project using Jsoup (if you think there is a more efficient way to achieve the purpose, please let me know). The purpose of my program is to parse certain useful information from different URLs and put it in a text file. I am not an expert in HTML or JavaScript, therefore, it has been difficult for me to code in Java exactly what I want to parse.
In the website that you see in the code below as one of the examples, the information that interests me to parse with Jsoup is everything you can see in the table under “Routing”(Route, Location, Vessel/Voyage, Container Arrival Date, Container Departure Date; = Origin, Seattle SSA Terminal T18, 26 Jun 15 A, 26 Jun 15 A… and so on).
So far, with Jsoup we are only able to parse the title of the website, yet we have been unsuccessful in getting any of the body.
Here is the code that I used, which I got from an online source:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
public class Jsouptest71115 {
public static void main(String[] args) throws Exception {
String url = "http://google.com/gentrack/trackingMain.do"
+ "?trackInput01=999061985";
Document document = Jsoup.connect(url).get();
String title = document.title();
System.out.println("title : " + title);
String body = document.select("body").text();
System.out.println("Body: " + body);
}
}
Working code:
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import java.io.IOException;
import java.util.ArrayList;
public class Sample {
public static void main(String[] args) {
String url = "http://homeport8.apl.com/gentrack/blRoutingPopup.do";
try {
Connection.Response response = Jsoup.connect(url)
.data("blNbr", "999061985") // tracking number
.method(Connection.Method.POST)
.execute();
Element tableElement = response.parse().getElementsByTag("table")
.get(2).getElementsByTag("table")
.get(2);
Elements trElements = tableElement.getElementsByTag("tr");
ArrayList<ArrayList<String>> tableArrayList = new ArrayList<>();
for (Element trElement : trElements) {
ArrayList<String> columnList = new ArrayList<>();
for (int i = 0; i < 5; i++) {
columnList.add(i, trElement.children().get(i).text());
}
tableArrayList.add(columnList);
}
System.out.println("Origin/Location: "
+tableArrayList.get(1).get(1));// row and column number
System.out.println("Discharge Port/Container Arrival Date: "
+tableArrayList.get(5).get(3));
} catch (IOException e) {
e.printStackTrace();
}
}
}
Output:
Origin/Location: SEATTLE SSA TERMINAL (T18), WA
Discharge Port/Container Arrival Date: 23 Jul 15 E
You need to use the document.select(selector) method, whose input is a CSS selector. Using CSS selectors you can identify parts of the web page body easily; to learn more about them, just google it or read a CSS selector reference.
In your particular case you will have a different problem, though: the table you are after is inside an iframe. If you look at the HTML of the web page you are visiting, the iframe's URL is "http://homeport8.apl.com/gentrack/blRoutingFrame.do", and if you visit this URL directly to access its content you will get an exception, which is perhaps some restriction from the server. To get the content properly you need to visit two URLs via Jsoup: 1. http://homeport8.apl.com/gentrack/trackingMain.do?trackInput01=999061985 and 2. http://homeport8.apl.com/gentrack/blRoutingFrame.do?trackInput01=999061985
From the first URL you'll get nothing useful, but from the second you'll get the tables of your interest. Try document.select("table"), which gives you a list of tables; iterate over this list and find the table of your interest. Once you have the table, use element.select("tr") to get the table rows, and then for each "tr" use element.select("td") to get the table cell data. A sketch of this approach is below.
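Here is a minimal sketch of that two-step fetch; passing the first response's cookies to the second request is an assumption about what the server checks, so adjust as needed:
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class RoutingTableDump {
    public static void main(String[] args) throws Exception {
        // Step 1: hit the outer tracking page and keep its cookies.
        Connection.Response outer = Jsoup.connect(
                "http://homeport8.apl.com/gentrack/trackingMain.do?trackInput01=999061985")
                .execute();
        // Step 2: fetch the iframe that actually holds the routing table.
        Document doc = Jsoup.connect(
                "http://homeport8.apl.com/gentrack/blRoutingFrame.do?trackInput01=999061985")
                .cookies(outer.cookies())
                .get();
        // Dump every table's rows and cells to locate the one of interest.
        for (Element table : doc.select("table")) {
            for (Element tr : table.select("tr")) {
                for (Element td : tr.select("td")) {
                    System.out.print(td.text() + " | ");
                }
                System.out.println();
            }
            System.out.println("---- end of table ----");
        }
    }
}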
The web page you are visiting doesn't use CSS class and id selectors, which would have made reading it with Jsoup a lot easier, so I'm afraid iterating over document.select("table") is your best and easiest option.
Good Luck.
I am running GWT RPC calls to a GAE server, querying for article objects that are stored in the datastore with JDO, and I am paginating the results by using cursors.
I send an initial RPC call to start the pagination with a "range" of 10 results. I store the query cursor in the memcache, and retrieve it when the user requests for the next page of 10 results. The code that implements this is shown below.
The range is always the same, 10 results. However, some subsequent RPC calls return 2 results, or 12 results. It is very inconsistent. The calls also sometimes return duplicate results.
I have read this Google developers documentation: https://developers.google.com/appengine/docs/java/datastore/queries#Java_Limitations_of_cursors. It mentions that: "Cursors don't always work as expected with a query that uses an inequality filter or a sort order on a property with multiple values. The de-duplication logic for such multiple-valued properties does not persist between retrievals, possibly causing the same result to be returned more than once."
As you can see in the code, I am sorting on a "date" property. This property only has one value.
Can you help me see what I am doing wrong here? Thanks.
This is the code that executes the RPC call on the GAE server:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.List;
import java.util.logging.Logger;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;
import javax.servlet.http.HttpSession;
import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.datanucleus.query.JDOCursorHelper;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
//...
private void getTagArticles(String tag, int range, boolean start) {
PersistenceManager pm = PMF.getNonTxnPm();
ArticleStreamItemSummaryDTO aDTO = null;
ArticleStreamItem aDetached = null;
summaryList = new ArrayList<ArticleStreamItemSummaryDTO>();
String cursorString = null;
session = getThreadLocalRequest().getSession();
UserAccount currentUser = LoginHelper.getLoggedInUser(session, pm);
String cursorID = currentUser.getId().toString() + tag;
if (start) { // The start or restart of the query
CacheSupport.cacheDelete(String.class.getName(), cursorID);
}
Object o = CacheSupport.cacheGet(String.class.getName(), cursorID);
if (o != null && o instanceof String) {
cursorString = (String) o;
}
Query q = null;
try {
q = pm.newQuery(ArticleStreamItem.class);
if (cursorString != null) {
Cursor cursor = Cursor.fromWebSafeString(cursorString);
Map<String, Object> extensionMap = new HashMap<String, Object>();
extensionMap.put(JDOCursorHelper.CURSOR_EXTENSION, cursor);
q.setExtensions(extensionMap);
}
q.setFilter("tag == tagParam");
q.declareParameters("String tagParam");
q.setOrdering("date desc");
q.setRange(0, range);
@SuppressWarnings("unchecked")
List<ArticleStreamItem> articleStreamList = (List<ArticleStreamItem>) q.execute(tag);
if (articleStreamList.iterator().hasNext()) {
Cursor cursor = JDOCursorHelper.getCursor(articleStreamList);
cursorString = cursor.toWebSafeString();
CacheSupport.cacheDelete(String.class.getName(), cursorID);
CacheSupport.cachePutExp(String.class.getName(), cursorID, cursorString, CACHE_EXPIR);
for (ArticleStreamItem a : articleStreamList) {
aDetached = pm.detachCopy(a);
aDTO = aDetached.buildSummaryItem();
summaryList.add(aDTO);
}
}
}
catch (Exception e) {
// e.printStackTrace();
logger.warning(e.getMessage());
}
finally {
q.closeAll();
pm.close();
}
}
The code snippet that I provided in the question above actually works well. The problem arose from the client side: the RPC calls were sometimes being made a few milliseconds apart, and that was creating the inconsistent behavior I was seeing in the results returned.
I changed the client side code to make a single RPC call every 5 seconds, and that fixed it.
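For anyone hitting the same issue, here is a minimal sketch of that client-side throttle using GWT's Timer; the class, method, and service names are placeholders, not from the code above:
import com.google.gwt.user.client.Timer;

// Collapse rapid-fire page requests into at most one RPC per 5 seconds.
public class ThrottledPager {
    private boolean requestQueued = false;

    public void requestNextPage() {
        if (requestQueued) {
            return; // a fetch is already scheduled; drop this trigger
        }
        requestQueued = true;
        new Timer() {
            @Override
            public void run() {
                requestQueued = false;
                fetchNextPage(); // fires the actual GWT-RPC call
            }
        }.schedule(5000);
    }

    private void fetchNextPage() {
        // e.g. articleService.getTagArticles(tag, 10, false, callback);
    }
}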
I have a process in Talend which gets the search result of a page, saves the html and writes it into files, as seen here:
Initially I had a two-step process, parsing the data out of the HTML files in Java and writing it to a MySQL database. It works; here is the code that does exactly that. (I'm a beginner, sorry for the lack of elegance.)
package org.jsoup.examples;
import java.io.*;
import org.jsoup.*;
import org.jsoup.nodes.*;
import org.jsoup.select.Elements;
import java.io.IOException;
public class parse2 {
static parse2 parseIt2 = new parse2();
String companyName = "Platzhalter";
String jobTitle = "Platzhalter";
String location = "Platzhalter";
String timeAdded = "Platzhalter";
public static void main(String[] args) throws IOException {
parseIt2.getData();
}
//
public void getData() throws IOException {
Document document = Jsoup.parse(new File("C:/Talend/workspace/WEBCRAWLER/output/keywords_SOA.txt"), "utf-8");
Elements elements = document.select(".joblisting");
for (Element element : elements) {
// Parse Data into Elements
Elements jobTitleElement = element.select(".job_title span");
Elements companyNameElement = element.select(".company_name span[itemprop=name]");
Elements locationElement = element.select(".locality span[itemprop=addressLocality]");
Elements dateElement = element.select(".job_date_added [datetime]");
// Strip Data from unnecessary tags
String companyName = companyNameElement.text();
String jobTitle = jobTitleElement.text();
String location = locationElement.text();
String timeAdded = dateElement.attr("datetime");
System.out.println("Firma:\t"+ companyName + "\t" + jobTitle + "\t in:\t" + location + " \t Erstellt am \t" + timeAdded );
}
}
}
Now I want to do the process end-to-end in Talend, and I was assured this works.
I tried this (which looks quite shady to me):
Basically I put all the imports in "advanced settings" and the code in the "basic settings" section. The importLibrary is supposed to load the Jsoup parsing library, as well as the MySQL connector (I might do the connection with Talend tools, though).
Obviously this isn't working. I tried stripping the base code of classes and such, and it was even worse. Can you help me get the generated .txt files parsed with Java here?
EDIT: Here is the Link to the talend Job http://www.share-online.biz/dl/8M5MD99NR1
EDIT2: I changed the code to the one I tried in JavaFlex, but it didn't work (the imports in the "start" part of the code, the rest in "body/main", and nothing in "end").
This is a problem related to Talend: in your code, use the complete class and method names, including their packages. For your document parsing, for example, you can use:
org.jsoup.nodes.Document document = org.jsoup.Jsoup.parse(new java.io.File("C:/Talend/workspace/WEBCRAWLER/output/keywords_SOA.txt"), "utf-8");
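The same pattern applies to the other Jsoup types from your parser; a short sketch based on the selectors already in your parse2 class:
org.jsoup.select.Elements elements = document.select(".joblisting");
for (org.jsoup.nodes.Element element : elements) {
    // same selectors as in parse2, just without import statements
    String companyName = element.select(".company_name span[itemprop=name]").text();
    String timeAdded = element.select(".job_date_added [datetime]").attr("datetime");
    System.out.println(companyName + "\t" + timeAdded);
}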