YouTube API too_long error code on short keywords? - java

I feel really silly for not being able to find this answer anywhere. Could someone please point me to a resource, or tell me directly, what the YouTube limits are with regard to keywords?
I found that the limit was once 120 characters, but then I heard it changed in March 2010 to 500 characters, so it shouldn't be a problem for me if that's true. It's possible I have another problem on my hands; maybe someone could help me with that one...
I'm using the YouTube API for Java with direct upload from a client-based application. I have tons of videos that I'm trying to upload to several different accounts at once (each in a different language), so I'm using threads to accomplish this. I limit the number of concurrent uploads to 10 just to play it safe. None of the descriptions will be much longer than 500 characters, and because of this error I made the keywords automatically limit themselves to 120 characters total (I even tried limiting them to 70 characters and had the same problem). So I'm not sure at all why this is happening. The error I get from Google is:
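Since the uploads are fanned out across threads, here is a minimal, generic sketch of capping concurrency with a fixed-size pool. The Runnable tasks below are placeholders; the real ones would wrap the gdata insert call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class UploadPool {
    // Cap the number of uploads running at the same time.
    private static final int MAX_CONCURRENT_UPLOADS = 10;

    // Runs every task on a fixed-size pool and blocks until all finish.
    public static int runAll(List<Runnable> uploads) {
        ExecutorService pool = Executors.newFixedThreadPool(MAX_CONCURRENT_UPLOADS);
        AtomicInteger completed = new AtomicInteger();
        for (Runnable upload : uploads) {
            pool.submit(() -> {
                upload.run();
                completed.incrementAndGet();
            });
        }
        pool.shutdown(); // stop accepting new tasks, let queued ones finish
        try {
            pool.awaitTermination(5, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        List<Runnable> fakeUploads = new ArrayList<>();
        for (int i = 0; i < 25; i++) {
            // A real task would call youTubeService.insert(...) here.
            fakeUploads.add(() -> { });
        }
        System.out.println(runAll(fakeUploads)); // 25
    }
}
```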
SEVERE: null
com.google.gdata.util.InvalidEntryException: Bad Request
<?xml version='1.0' encoding='UTF-8'?>
<errors>
  <error>
    <domain>yt:validation</domain>
    <code>too_long</code>
    <location type='xpath'>media:group/media:keywords/text()</location>
  </error>
</errors>
I'm also getting an InvalidEntryException as an error, but I'll deal with that later (I think it has to do with my authentication timing out or something).
Anyway, I don't get this error on every video. In fact, most videos make it just fine. I haven't yet tested to find out which videos are throwing the error (I will do that when I'm finished with this post), but it's beyond me why I'm getting it. I'll post my code here, but it's pretty much exactly what's in the API documentation, so I don't know whether it'll be much help. I'm guessing there's something fundamental I'm missing.
uploadToYouTube():
private void uploadToYouTube() throws IOException, ServiceException {
    TalkContent talkContent = transfer.getTalkContent();
    YouTubeService youTubeService = talkContent.getLanguage().getYouTubeService(); // see code for this below...
    String title = talkContent.getTitle();
    String category = talkContent.getCategory();
    String description = talkContent.getDescription();
    List<String> tags = talkContent.getTags();
    boolean privateVid = true;
    VideoEntry newEntry = new VideoEntry();
    YouTubeMediaGroup mg = newEntry.getOrCreateMediaGroup();
    // Title
    mg.setTitle(new MediaTitle());
    mg.getTitle().setPlainTextContent(title);
    // Description
    mg.setDescription(new MediaDescription());
    mg.getDescription().setPlainTextContent(description);
    // Categories and developer tags
    mg.addCategory(new MediaCategory(YouTubeNamespace.CATEGORY_SCHEME, category));
    mg.addCategory(new MediaCategory(YouTubeNamespace.DEVELOPER_TAG_SCHEME, "kentcdodds"));
    mg.addCategory(new MediaCategory(YouTubeNamespace.DEVELOPER_TAG_SCHEME, "moregoodfoundation"));
    // Keywords
    mg.setKeywords(new MediaKeywords());
    int tagLimit = 70; // characters
    int totalTags = 0; // characters
    for (String tag : tags) {
        if ((totalTags + tag.length()) < tagLimit) {
            mg.getKeywords().addKeyword(tag);
            totalTags += tag.length();
        }
    }
    // Visible status
    mg.setPrivate(privateVid);
    // Geo coordinates
    newEntry.setGeoCoordinates(new GeoRssWhere(40.772555, -111.892480));
    MediaFileSource ms = new MediaFileSource(new File(transfer.getOutputFile()), transfer.getYouTubeFileType());
    newEntry.setMediaSource(ms);
    String uploadUrl = "http://uploads.gdata.youtube.com/feeds/api/users/default/uploads";
    VideoEntry createdEntry = youTubeService.insert(new URL(uploadUrl), newEntry);
    status = Transfer.FINISHEDUP;
}
getYouTubeService():
public YouTubeService getYouTubeService() {
    if (youTubeService == null) {
        try {
            authenticateYouTube();
        } catch (AuthenticationException ex) {
            JOptionPane.showMessageDialog(null, "There was an authentication error!" + StaticClass.newline + ex, "Authentication Error", JOptionPane.ERROR_MESSAGE);
            Logger.getLogger(Language.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
    return youTubeService;
}
authenticateYouTube():
public void authenticateYouTube() throws AuthenticationException {
    if (youTubeService == null) {
        System.out.println("Authenticating YouTube Service");
        youTubeService = new YouTubeService("THENAMEOFMYPROGRAM", "THEDEVELOPERIDHERE");
        youTubeService.setUserCredentials(youTubeUsername, youTubePassword);
        System.out.println("Authentication of YouTube Service succeeded");
    }
}
Any help on this would be great! Also, before I call the uploadToYouTube() method I print out that the video is being uploaded to YouTube, and after the method call I print out that it finished. Can someone explain why those are printed within moments of one another? I'm not starting a new thread for the uploadToYouTube() method, so I'm guessing the insert() method on the YouTubeService starts a new thread for the upload. It's a little annoying, though, because I'm never quite sure at what point the video finishes uploading, and if I stop the program before it's through, the upload stops.
Anyway! Thanks for reading all of this! I hope someone can help me out!

The solution was really simple! The problem was that even though I was never over the 500-character limit for total tags, I was sometimes over the 30-character limit for a single tag! I just changed the tag-adding lines to the following code:
//Keywords
mg.setKeywords(new MediaKeywords());
int totalTagsLimit = 500; // characters
int singleTagLimit = 30; // characters
int totalTags = 0; // characters
for (String tag : tags) {
    if ((totalTags + tag.length()) < totalTagsLimit && tag.length() < singleTagLimit) {
        mg.getKeywords().addKeyword(tag);
        totalTags += tag.length();
    }
}
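For what it's worth, the same filtering logic can be pulled into a small standalone helper so the limits are easy to unit test. This is just a sketch of the loop above; the 500 and 30 character limits are the ones this answer found empirically, so verify them against the current API documentation before relying on them:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TagFilter {
    static final int TOTAL_TAGS_LIMIT = 500; // characters across all tags
    static final int SINGLE_TAG_LIMIT = 30;  // characters per tag

    // Keeps tags in order, skipping any single tag that is too long
    // and stopping short of the total-characters budget.
    public static List<String> filter(List<String> tags) {
        List<String> kept = new ArrayList<>();
        int total = 0;
        for (String tag : tags) {
            if ((total + tag.length()) < TOTAL_TAGS_LIMIT && tag.length() < SINGLE_TAG_LIMIT) {
                kept.add(tag);
                total += tag.length();
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // 41 characters, so it gets dropped by the per-tag limit.
        String tooLong = "averyveryverylongkeywordthatisover30chars";
        List<String> result = filter(Arrays.asList("java", tooLong, "youtube"));
        System.out.println(result); // [java, youtube]
    }
}
```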

Related

Jsoup in Java always returns the same page with different URLs - scraping a website

public void conectUrl() throws IOException, InterruptedException {
    product = new ArrayList<>();
    String url = "https://www.continente.pt/stores/continente/pt-pt/public/pages/category.aspx?cat=campanhas#/?page=1&sf=Revelance";
    page = Jsoup.connect(url).userAgent("JSoup scraper").get();
    //get actual page
    Elements paginaAtu = page.getElementsByClass("_actualPage");
    paginaAtual = Integer.parseInt(paginaAtu.attr("value"));
    //get total pages
    Elements nextPage = page.getElementsByClass("_actualTotalPages");
    numPaginas = Integer.parseInt(nextPage.attr("value"));
    for (paginaAtual = 1; paginaAtual < numPaginas; paginaAtual++) {
        getProductInfo("https://www.continente.pt/stores/continente/pt-pt/public/pages/category.aspx?cat=campanhas#/?page=" + paginaAtual + "&sf=Revelance");
    }
}
It always returns the same result for different URLs. I already searched about Jsoup caching; I'm not the first person to ask this question, but nobody says how to resolve the situation. In theory, Jsoup doesn't cache pages...
I already made the code sleep for 30 seconds before loading the new URL, but it still returns the same result.
Can anybody help me? Thank you in advance.
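One detail about those URLs that may explain the symptom: everything after the '#' is a URI fragment, which is handled client-side and never sent to the server in the HTTP request, so from the server's point of view page=1 and page=2 are the identical request. The site presumably switches pages with JavaScript, which Jsoup does not execute. A quick standalone check with java.net.URI (nothing Jsoup-specific):

```java
import java.net.URI;

public class FragmentDemo {
    // Returns the part of the URL that is actually sent to the server
    // (path + query); the #fragment never leaves the client.
    public static String requestPart(String url) {
        URI uri = URI.create(url);
        return uri.getPath() + "?" + uri.getQuery();
    }

    public static void main(String[] args) {
        String page1 = "https://www.continente.pt/stores/continente/pt-pt/public/pages/category.aspx?cat=campanhas#/?page=1&sf=Revelance";
        String page2 = "https://www.continente.pt/stores/continente/pt-pt/public/pages/category.aspx?cat=campanhas#/?page=2&sf=Revelance";
        System.out.println(requestPart(page1));
        System.out.println(requestPart(page2));
        // Both lines are identical, which is why Jsoup keeps
        // fetching the same page.
        System.out.println(requestPart(page1).equals(requestPart(page2))); // true
    }
}
```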

Conversation ID leads to unknown path in graph-api

I have a code that fetches conversations and the messages inside them (a specific number of pages). It works most of the time, but for certain conversations it throws an exception, such as:
Exception in thread "main" com.restfb.exception.FacebookOAuthException: Received Facebook error response of type OAuthException: Unknown path components: /[id of the message]/messages (code 2500, subcode null)
at com.restfb.DefaultFacebookClient$DefaultGraphFacebookExceptionMapper.exceptionForTypeAndMessage(DefaultFacebookClient.java:1192)
at com.restfb.DefaultFacebookClient.throwFacebookResponseStatusExceptionIfNecessary(DefaultFacebookClient.java:1118)
at com.restfb.DefaultFacebookClient.makeRequestAndProcessResponse(DefaultFacebookClient.java:1059)
at com.restfb.DefaultFacebookClient.makeRequest(DefaultFacebookClient.java:970)
at com.restfb.DefaultFacebookClient.makeRequest(DefaultFacebookClient.java:932)
at com.restfb.DefaultFacebookClient.fetchConnection(DefaultFacebookClient.java:356)
at test.Test.main(Test.java:40)
After debugging I found the ID that doesn't work and tried to access it from the Graph API, which results in an "unknown path components" error. I also attempted to manually find the conversation in me/conversations and click the next-page link in the Graph API Explorer, which led to the same error.
Is there a different way to retrieve a conversation than by ID? And if not, could someone show me an example of how to first verify that a conversation ID is valid, so that if there are conversations I can't retrieve, I can skip them instead of getting an error? Here's my current code:
Connection<Conversation> fetchedConversations = fbClient.fetchConnection("me/Conversations", Conversation.class);
int pageCnt = 2;
for (List<Conversation> conversationPage : fetchedConversations) {
    for (Conversation aConversation : conversationPage) {
        String id = aConversation.getId();
        //The line of code which causes the exception
        Connection<Message> messages = fbClient.fetchConnection(id + "/messages", Message.class, Parameter.with("fields", "message,created_time,from,id"));
        int tempCnt = 0;
        for (List<Message> messagePage : messages) {
            for (Message msg : messagePage) {
                System.out.println(msg.getFrom().getName());
                System.out.println(msg.getMessage());
            }
            if (tempCnt == pageCnt) {
                break;
            }
            tempCnt++;
        }
    }
}
Thanks in advance!
Update: I surrounded the problematic part with a try/catch as a temporary solution, and counted the number of occurrences: it only affects 3 out of 53 conversations. I also printed all the IDs, and it seems that these 3 IDs are the only ones that contain a "/" symbol; I'm guessing it has something to do with the exception.
The IDs that work look something like t_[text] (sometimes with a "." or a ":" symbol), and the ones that cause an exception are always t_[text]/[text].
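Based on that observation, one stopgap (purely a sketch of the skip-the-bad-IDs workaround described in the update; the helper name is made up) is to pre-check each ID before calling fetchConnection:

```java
public class ConversationIdCheck {
    // Hypothetical sanity check based on the pattern observed above:
    // IDs of the form t_[text]/[text] were the ones failing, so skip
    // anything containing a '/' before building the "/messages" path.
    public static boolean looksFetchable(String conversationId) {
        return !conversationId.contains("/");
    }

    public static void main(String[] args) {
        System.out.println(looksFetchable("t_abc.123")); // true
        System.out.println(looksFetchable("t_abc/def")); // false
    }
}
```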
conv_id/messages is not a valid Graph API call; messages is a field of conversation.
Here is what you do (a single call to the API):
Connection<Conversation> conversations = facebookClient.fetchConnection("me/conversations", Conversation.class);
for (Conversation conv : conversations.getData()) {
    // To get list of messages for given conversation
    LinkedList<Message> allConvMessagesStorage = new LinkedList<Message>();
    Connection<Message> messages25 = facebookClient.fetchConnection(conv.getId() + "/messages", Message.class);
    //Add messages returned
    allConvMessagesStorage.addAll(messages25.getData());
    //Check if there is next page to fetch
    boolean progress = messages25.hasNext();
    while (progress) {
        messages25 = facebookClient.fetchConnectionPage(messages25.getNextPageUrl(), Message.class);
        //Append next page of messages
        allConvMessagesStorage.addAll(messages25.getData());
        progress = messages25.hasNext();
    }
}

View all comments on a YouTube video

I am trying to get all the comments on a YouTube video using a Java program, but I cannot get them all because the page shows a "Show more" button instead of all the comments. I'm looking for a way to get all the comments, or pages of comments that I can go through. I have the video ID and such; I just need the comments.
I have tried all_comments instead of watch in the URL, but it still doesn't show all the comments and redirects back to watch.
I have also looked at the YouTube API and can only find how to get a comment by its ID, but I need to get all comments for a video ID.
If anyone knows how to do this please tell me.
I have added a 50 rep bounty for whoever can give me a good answer to this.
You need to request the comment-threads list for your video and then page forward using the next-page token from the last response:
private static int counter = 0;
private static YouTube youtube;

public static void main(String[] args) throws Exception {
    // For Auth details consider:
    // https://github.com/youtube/api-samples/blob/master/java/src/main/java/com/google/api/services/samples/youtube/cmdline/Auth.java
    // Also don't forget secrets https://github.com/youtube/api-samples/blob/master/java/src/main/resources/client_secrets.json
    List<String> scopes = Lists.newArrayList("https://www.googleapis.com/auth/youtube.force-ssl");
    Credential credential = Auth.authorize(scopes, "commentthreads");
    youtube = new YouTube.Builder(Auth.HTTP_TRANSPORT, Auth.JSON_FACTORY, credential).build();
    String videoId = "video_id";
    // Get video comments threads
    CommentThreadListResponse commentsPage = prepareListRequest(videoId).execute();
    while (true) {
        handleCommentsThreads(commentsPage.getItems());
        String nextPageToken = commentsPage.getNextPageToken();
        if (nextPageToken == null)
            break;
        // Get next page of video comments threads
        commentsPage = prepareListRequest(videoId).setPageToken(nextPageToken).execute();
    }
    System.out.println("Total: " + counter);
}

private static YouTube.CommentThreads.List prepareListRequest(String videoId) throws Exception {
    return youtube.commentThreads()
            .list("snippet,replies")
            .setVideoId(videoId)
            .setMaxResults(100L)
            .setModerationStatus("published")
            .setTextFormat("plainText");
}

private static void handleCommentsThreads(List<CommentThread> commentThreads) {
    for (CommentThread commentThread : commentThreads) {
        List<Comment> comments = Lists.newArrayList();
        comments.add(commentThread.getSnippet().getTopLevelComment());
        CommentThreadReplies replies = commentThread.getReplies();
        if (replies != null)
            comments.addAll(replies.getComments());
        System.out.println("Found " + comments.size() + " comments.");
        // Do your comments logic here
        counter += comments.size();
    }
}
Consider api-samples, if you need a sample skeleton project.
Update
The situation where you can't get all the comments can also be caused by the quota limits (at least I faced it):
units/day: 50,000,000
units/100 seconds/user: 300,000
These are not Java-, Python-, or JS-specific rules. If you want to go above the quota, you can try to apply for a higher quota. Though, I would start by controlling your throughput; it's very easy to go above the 100 seconds/user quota.
Try this; it can download all the comments for a given video, which I have tested:
https://github.com/egbertbouman/youtube-comment-downloader
python downloader.py --youtubeid YcZkCnPs45s --output OUT
Downloading Youtube comments for video: YcZkCnPs45s
Downloaded 1170 comment(s)
Done!
The output is in JSON format:
{
    "text": "+Tony Northrup many thanks for the prompt reply - I'll try that.",
    "time": "1 day ago",
    "cid": "z13nfbog0ovqyntk322txzjamuensvpch.1455717946638546"
}

Leading zero for outbound asterisk call

Hi all,
I am a newbie with Asterisk, so I don't know much about it. I am currently working on an application that makes an outbound call. Previously it wasn't working due to some provisioning issues, and the ANI showed a NULL value, but now it's working fine because I used the setCallerId function from Java.
The problem is that an outbound call is generated successfully to the users for a number, say 123, but a leading zero '0' is attached to it, so it becomes '0123'. I am assuming that '0' is the national-number prefix that Asterisk attaches as configured, and my required number is '123'. I want to change this from the Java code.
I searched a lot of forums/sites, including the leading VoIP-info site, but haven't had success yet. I want to resolve it through Java code if possible. Any help would be greatly appreciated.
Update:
Note:
The Asterisk version is 1.8; chan_ss7 is 2.1.
extensions.conf
[gu]
exten => 007775,1,Answer() ; this will answer the call
exten => 007775,2,Wait(1) ; will wait for 1 second
exten => 007775,3,NoOp(Call from ${CALLERID(all)})
exten => 007775,4,AGI(agi://localhost:4573/com.digitania.ivr.gu.domain.ServiceImplDelegate?uniqueId=${UNIQUEID}&isOutbound=${isOutbound}&msisdn=${MSISDN})
exten => 007775,5,Hangup() ; end the call
My java code
private static final String PARAM_MSISDN = "MSISDN";
private static final String PARAM_ISOUTBOUND = "isOutbound";
private static final String EXTENSION = "007775";
private static final String CALLER_ID = "7775";
public static int callCount = 0;

public void call(String msisdn, CallListener listener) {
    try {
        OriginateAction originateAction = new OriginateAction();
        originateAction.setChannel(new StringBuilder("SS7/").append(msisdn).toString());
        originateAction.setContext("gu");
        originateAction.setPriority(new Integer(1));
        originateAction.setExten(EXTENSION);
        originateAction.setCallerId(CALLER_ID);
        originateAction.setVariable(PARAM_MSISDN, msisdn);
        originateAction.setVariable(PARAM_ISOUTBOUND, "true");
        ManagerResponse originateResponse = managerConnection.sendAction(originateAction, 30000L);
        System.out.println(new StringBuilder("RESPONSE: \n").append(originateResponse.getResponse()).toString());
        System.out.println(originateResponse.getMessage());
    } catch (TimeoutException e) {
        listener.callTimeOutException();
    } catch (Exception e) {
        e.printStackTrace();
    }
    callCount++;
}
It's been a while for me using libss7, but have you tried looking at these options in chan_dahdi.conf?
ss7_called_nai
ss7_calling_nai
ss7_internationalprefix
ss7_nationalprefix
ss7_subscriberprefix
ss7_unknownprefix
networkindicator
Playing a bit with them (and also talking to your provider) should allow you to send the correct numbers.
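For example, something along these lines in chan_dahdi.conf (all values here are placeholders to illustrate the options listed above; the correct nature-of-address settings and prefixes depend entirely on your provider and numbering plan):

```ini
; chan_dahdi.conf -- illustrative values only, confirm with your provider
ss7_called_nai = dynamic       ; nature-of-address for called numbers
ss7_calling_nai = dynamic      ; nature-of-address for calling numbers
ss7_nationalprefix = 0         ; prefix applied to national numbers; if a stray
                               ; leading 0 shows up, try leaving this empty
ss7_internationalprefix = 00
ss7_unknownprefix =
```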
Hope it helps.

Frontend Instance Hours - How can I decrease the usage?

I'm trying to use the free Google App Engine as a backend for Google Cloud Messaging for my next Android app, but when I had "finished" writing the server it was already using almost 100% of the free frontend instance hours. My question is whether, and how, I can improve this.
The application is a servlet that is called every 15 minutes from a cron job. The servlet downloads and parses 3 RSS feeds, checks whether anything has changed since the last call, and saves the dates to the database (JDO and memcache, 3 calls) to know when it last ran. If any changes have happened since the last call, it sends that information out to the connected phones; right now 3 phones are connected, so it's just one call to Google's servers. No data is returned from the servlet.
Here is the code
public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException
{
    boolean sendMessage = false;
    String eventsFeedUrl = "http://rss.com";
    String newsFeedUrl = "http://rss2.com";
    String trafficFeedUrl = "http://rss3.com";
    response.setContentType("text/plain");
    Message.Builder messageBuilder = new Message.Builder();
    String messageData = getFeedMessageData(eventsFeedUrl);
    if (!messageData.equals(StringUtils.EMPTY))
    {
        messageBuilder.addData("event", messageData);
        sendMessage = true;
    }
    messageData = getFeedMessageData(newsFeedUrl);
    if (!messageData.equals(StringUtils.EMPTY))
    {
        messageBuilder.addData("news", messageData);
        sendMessage = true;
    }
    messageData = getFeedMessageData(trafficFeedUrl);
    if (!messageData.equals(StringUtils.EMPTY))
    {
        messageBuilder.addData("traffic", messageData);
        sendMessage = true;
    }
    if (sendMessage)
    {
        sendMessage(messageBuilder.build(), response, debug);
    }
}

private void sendMessage(Message message, HttpServletResponse response, boolean debug)
        throws IOException
{
    SendResult sendResult = GCMService.send(message, Device.list());
    int deleteCount = 0;
    for (MessageResult errorResult : sendResult.getErrorResults())
    {
        if (deleteCount < 200 && (errorResult.getErrorName().equals(Constants.ERROR_NOT_REGISTERED) || errorResult.getErrorName().equals(Constants.ERROR_INVALID_REGISTRATION)))
        {
            Device.delete(errorResult.getDeviceId());
            deleteCount++;
        }
    }
}

private String getFeedMessageData(String feedUrl)
{
    String messageData = StringUtils.EMPTY;
    FeedHistory history = FeedHistory.getFeedHistoryItem(feedUrl);
    Feed feedContent = RssParser.parse(feedUrl);
    if (feedContent != null && feedContent.getFeedItems().size() > 0)
    {
        if (history == null)
        {
            history = new FeedHistory(feedUrl);
            history.setLastDate(new Date(0));
            history.save();
        }
        for (FeedItem item : feedContent.getFeedItems())
        {
            if (item.getDate().after(history.getLastDate()))
            {
                messageData += "|" + item.getCountyId();
            }
        }
        if (!messageData.equals(StringUtils.EMPTY))
        {
            messageData = new SimpleDateFormat("yyyyMMddHHmmssZ").format(history.getLastDate()) + messageData;
        }
        history.setLastDate(feedContent.getFeedItem(0).getDate());
        history.save();
    }
    return messageData;
}
The call Device.list() uses memcache, so after one call it will be cached. The RSS parser is a simple parser using org.w3c.dom.NodeList and javax.xml.parsers.DocumentBuilder. According to the log file, I use the same instance for days, so there are no problems with instances starting up and taking resources. A normal call to the servlet looks like this in the log:
ms=1480 cpu_ms=653 api_cpu_ms=0 cpm_usd=0.019673
I have some ideas of what to try next: make the RSS download calls asynchronous to minimize the request time, or move the RSS parsing to a background job. What else can be done? It feels like I have made some fundamental error here, because how can a normal web app work if this servlet can't be called 100 times in 24 hours without consuming 100% of the frontend hours?
/Viktor
Your idle instances hang around for a little while before shutting themselves down. I don't know how long this is, but I'm guessing it's somewhere in the 5-15 minute range. If it is in fact 15 minutes, then your cron job hitting it every 15 minutes will keep it alive indefinitely, so you'll end up using 24 instance hours a day.
You can test this theory by setting your cron job to run every 30 minutes, and see if it halves your instance hour usage.
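If you want to test that theory, the change is just the schedule line in cron.xml; for example (the URL and description here are placeholders for your servlet's actual mapping):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <!-- placeholder path; use your servlet's actual URL -->
    <url>/checkfeeds</url>
    <description>Poll RSS feeds and push changes via GCM</description>
    <schedule>every 30 minutes</schedule>
  </cron>
</cronentries>
```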