I have looked at the docs regarding this and am struggling with how IDriveItemCollectionPage works.
I am currently doing the following, trying to list all child DriveItems of the root drive of a site, given its drive ID, with the Java SDK:
public ArrayList<DriveItem> getDriveItemChildrenFoldersOfRootDrive(String rootDriveId) {
    // gets the child DriveItems of the root of the given drive
    IDriveItemCollectionPage driveChildren = mGraphServiceClient.drives().byId(rootDriveId).root().children().buildRequest().get();
    ArrayList<DriveItem> results = new ArrayList<DriveItem>();
    results.addAll(driveChildren.getCurrentPage());
    return results;
}
I realize that if getNextPage returns null there are no more results, but do you have to make another API call to get the next page if there is one? How do I do that with the above setup?
I agree with Brad. I tested this on my side and it succeeded. Here is my code:
public static void main(String[] args) {
    IGraphServiceClient client = GetClient();
    IDriveCollectionPage page = client.drives().buildRequest().get();
    List<Drive> drives = page.getCurrentPage();
    while (page.getNextPage() != null) {
        page = page.getNextPage().buildRequest().get();
        drives.addAll(page.getCurrentPage());
    }
    System.out.println(drives.size());
}
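The same getNextPage pattern should apply to the children collection from the question. A rough, untested adaptation (assuming the same SDK version and the mGraphServiceClient field from the question):

public ArrayList<DriveItem> getAllChildrenOfRootDrive(String rootDriveId) {
    // first page of children
    IDriveItemCollectionPage page = mGraphServiceClient.drives().byId(rootDriveId).root().children().buildRequest().get();
    ArrayList<DriveItem> results = new ArrayList<DriveItem>();
    results.addAll(page.getCurrentPage());
    // as long as getNextPage() is non-null, another request fetches the next page
    while (page.getNextPage() != null) {
        page = page.getNextPage().buildRequest().get();
        results.addAll(page.getCurrentPage());
    }
    return results;
}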
I have a Java application which uses Selenium WebDriver to crawl/scrape information from Google Play Store applications. I have about 30 links to apps, and I have a problem collecting ALL comments from each application.
For example, this application needs a lot of scrolling to load all comments, but other applications need less/more scrolling.
How can I dynamically load all comments for each app?
Since you have not shared sample code, I will share a JavaScript snippet and then provide a C# implementation that you can use as a reference for your Java Selenium project.
Sample JavaScript code
let i = 0;
var element = document.querySelectorAll("div>span[jsname='bN97Pc']")[i];
var timer = setInterval(function() {
    console.log(element);
    element.scrollIntoView();
    i++;
    element = document.querySelectorAll("div>span[jsname='bN97Pc']")[i];
    if (element === undefined)
        clearInterval(timer); // stop once there are no more comment elements
}, 500);
Running the above code in the console, once you are on the application page with the comments you shared, will scroll to the end of the page while printing each comment to the console.
Sample code with Selenium C# bindings :
static void Main(string[] args)
{
    ChromeDriver driver = new ChromeDriver();
    driver.Navigate().GoToUrl("https://play.google.com/store/apps/details?id=com.plokia.ClassUp&hl=en&showAllReviews=true");
    ExtractComments(driver);
    driver.Quit();
}

private static void ExtractComments(ChromeDriver driver, int startingIndex = 0)
{
    IEnumerable<IWebElement> comments = driver.FindElementsByCssSelector(@"div>span[jsname='bN97Pc']");
    if (comments.Count() <= startingIndex)
        return; // no more new comments, hence return
    if (startingIndex > 0)
        comments = comments.Skip(startingIndex); // skip already processed elements
    // process located comments
    foreach (var comment in comments)
    {
        string commentText = comment.Text;
        Console.WriteLine(commentText);
        (driver as IJavaScriptExecutor).ExecuteScript("arguments[0].scrollIntoView()", comment);
        Thread.Sleep(250);
        startingIndex++;
    }
    Thread.Sleep(2000); // let more comments load once we have consumed the existing ones
    ExtractComments(driver, startingIndex); // recursively process any comments loaded after scrolling
}
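Since the question is about Java, here is a rough Java Selenium translation of the same idea (an untested sketch; the CSS selector, URL, and sleep timings are taken from the C# example, and the class name is just a placeholder):

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class PlayStoreComments {
    public static void main(String[] args) throws InterruptedException {
        ChromeDriver driver = new ChromeDriver();
        driver.get("https://play.google.com/store/apps/details?id=com.plokia.ClassUp&hl=en&showAllReviews=true");
        extractComments(driver, 0);
        driver.quit();
    }

    private static void extractComments(ChromeDriver driver, int startingIndex) throws InterruptedException {
        List<WebElement> comments = driver.findElements(By.cssSelector("div>span[jsname='bN97Pc']"));
        if (comments.size() <= startingIndex) {
            return; // no new comments were loaded, so we are done
        }
        // process only the comments we have not seen yet
        for (int i = startingIndex; i < comments.size(); i++) {
            WebElement comment = comments.get(i);
            System.out.println(comment.getText());
            ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView()", comment);
            Thread.sleep(250);
            startingIndex++;
        }
        Thread.sleep(2000); // give the page time to load more comments
        extractComments(driver, startingIndex); // recurse until no new comments appear
    }
}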
Hope this helps.
I'm currently working with the OPC UA Foundation Java Stack, without any additional SDKs.
I am unable to implement subscriptions with monitored items and to get the change notifications via the publish response. I am new to Java; any help regarding this would be really helpful. Thank you.
This is C#, not Java, but you should be able to translate it. I hope it helps.
if (this.subscription == null)
{
    this.subscription = new Opc.Ua.Client.Subscription(this.session.DefaultSubscription)
    {
        PublishingInterval = this.config.ReportingInterval,
        TimestampsToReturn = TimestampsToReturn.Both
    };
    this.session.AddSubscription(subscription);
    subscription.Create();
}

item = new MonitoredItem(subscription.DefaultItem)
{
    StartNodeId = new NodeId(property.Identifier, this.config.NamespaceId),
    SamplingInterval = this.config.SamplingInterval,
    QueueSize = this.config.QueueSize,
};
subscription.AddItem(item);
subscription.ApplyChanges();
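Note that the C# snippet uses the client SDK's Subscription helper, which the raw Foundation Java Stack does not have, so in Java the individual services have to be called directly. The following is only a rough, untested sketch of those calls: it assumes sessionChannel is an already created and activated SessionChannel, nodeId is the node to monitor, and the class and setter names follow the stack's generated service classes, so they may differ slightly in your stack version.

// 1. Create the subscription
CreateSubscriptionRequest subReq = new CreateSubscriptionRequest();
subReq.setRequestedPublishingInterval(1000.0);
subReq.setRequestedLifetimeCount(UnsignedInteger.valueOf(100));
subReq.setRequestedMaxKeepAliveCount(UnsignedInteger.valueOf(10));
subReq.setPublishingEnabled(true);
CreateSubscriptionResponse subRes = sessionChannel.CreateSubscription(subReq);
UnsignedInteger subscriptionId = subRes.getSubscriptionId();

// 2. Add a monitored item for the Value attribute of the node
ReadValueId itemToMonitor = new ReadValueId(nodeId, Attributes.Value, null, null);
MonitoringParameters params = new MonitoringParameters();
params.setClientHandle(UnsignedInteger.valueOf(1));
params.setSamplingInterval(500.0);
params.setQueueSize(UnsignedInteger.valueOf(10));
params.setDiscardOldest(true);
MonitoredItemCreateRequest itemReq = new MonitoredItemCreateRequest();
itemReq.setItemToMonitor(itemToMonitor);
itemReq.setMonitoringMode(MonitoringMode.Reporting);
itemReq.setRequestedParameters(params);
CreateMonitoredItemsRequest createReq = new CreateMonitoredItemsRequest();
createReq.setSubscriptionId(subscriptionId);
createReq.setTimestampsToReturn(TimestampsToReturn.Both);
createReq.setItemsToCreate(new MonitoredItemCreateRequest[] { itemReq });
sessionChannel.CreateMonitoredItems(createReq);

// 3. Poll for notifications with the Publish service
PublishRequest pubReq = new PublishRequest();
pubReq.setSubscriptionAcknowledgements(new SubscriptionAcknowledgement[0]);
PublishResponse pubRes = sessionChannel.Publish(pubReq);
// pubRes.getNotificationMessage() carries the DataChangeNotification structures
// (encoded as ExtensionObjects) with the changed values; acknowledge the
// returned sequence number in the next PublishRequest.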
I am working on a small app for myself and I just don't understand why my code is working in Eclipse but not on my phone using Android Studio.
public static ArrayList<Link> getLinksToChoose(String searchUrl) {
    ArrayList<Link> linkList = new ArrayList<Link>();
    try {
        System.out.println(searchUrl);
        Document doc = Jsoup.connect(searchUrl).timeout(3000).userAgent("Chrome").get();
        Elements links = doc.select("tr");
        links.remove(0);
        Elements newLinks = new Elements();
        for (Element link : links) {
            Link newLink = new Link(getURL(link), getName(link), getLang(link));
            linkList.add(newLink);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return linkList;
}
The problem is I can't even get the Document. I always get an HttpURLConnectionImpl in the line where I try to get the HTML doc. I have read a bit about Jsoup on Android. Some people suggest using AsyncTask, but it doesn't seem like that would solve my problem.
The loading of the content must happen outside the main thread, e.g. in an AsyncTask.
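A minimal sketch of what that could look like, assuming the getLinksToChoose method and Link class from the question (the task class name and UI handling are placeholders):

// Run the Jsoup call off the main thread with an AsyncTask; doInBackground runs
// on a worker thread, onPostExecute runs back on the UI thread with the result.
private static class LoadLinksTask extends AsyncTask<String, Void, ArrayList<Link>> {
    @Override
    protected ArrayList<Link> doInBackground(String... urls) {
        return getLinksToChoose(urls[0]); // network + parsing happen here, off the UI thread
    }

    @Override
    protected void onPostExecute(ArrayList<Link> links) {
        // update the UI with the result, e.g. hand the list to an adapter
    }
}

// usage, e.g. from onCreate():
new LoadLinksTask().execute(searchUrl);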
I am working on an Android news app which gets news from the Google News RSS feed. I am currently getting the news and showing it to the user in my app. But I want to show a notification to the user when new news appears in the Google RSS feed. I have no idea how to do it, as I am new to Android and could not find anything relevant on Google. Thank you.
Here is the code I have done so far:
internal static List<FeedItem> GetFeedItems(string url)
{
    List<FeedItem> feedItemsList = new List<FeedItem>();
    try
    {
        HttpClient wc = new HttpClient();
        var html = wc.GetStringAsync(new Uri(url)).Result;
        XElement xmlitems = XElement.Parse(html);
        // We need to create a list of the elements
        List<XElement> elements = xmlitems.Descendants("item").ToList();
        // Now we're putting the information that we got into our ListBox in the XAML code.
        // We have to use a foreach statement to be able to read all the elements.
        // Description, Link, Title are the attributes in the RSSItem class that I've already added.
        List<FeedItem> aux = new List<FeedItem>();
        foreach (XElement rssItem in elements)
        {
            FeedItem rss = new FeedItem();
            rss.Description = rssItem.Element("description").Value;
            rss.Link = rssItem.Element("link").Value;
            rss.Title = rssItem.Element("title").Value;
            feedItemsList.Add(rss);
        }
    }
    catch (Exception)
    {
        throw;
    }
    return feedItemsList;
}
Use Parse's push notifications; it's very easy to use and has great documentation.
https://parse.com
Go for a push notification service; it's the only way to get the notification.
Follow this tutorial: http://www.androidhive.info/2012/10/android-push-notifications-using-google-cloud-messaging-gcm-php-and-mysql/
NotificationManager is what you want here.
See this: http://developer.android.com/guide/topics/ui/notifiers/notifications.html
A simple Google search also shows tutorials for the same.
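A minimal sketch of posting such a notification with NotificationCompat (the method name, icon, notification id, and the FeedItem.getTitle() accessor are placeholders; on Android 8.0+ you would additionally need a notification channel):

// Post a notification when a new item is detected in the feed. Call this from
// whatever background check (e.g. a scheduled service) notices a new RSS item.
private void showNewNewsNotification(Context context, FeedItem item) {
    NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
            .setSmallIcon(R.drawable.ic_launcher)   // placeholder icon
            .setContentTitle("New news available")
            .setContentText(item.getTitle())         // assumes a getter on your FeedItem class
            .setAutoCancel(true);

    NotificationManager manager =
            (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
    manager.notify(1, builder.build());              // 1 = arbitrary notification id
}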
Very, very new to programming (it's my 2nd day). I am looking at a finance webpage and am trying to extract the stock symbols from it. Using the source code from the webpage, I'd like a list that looks like ADK-A, AEH, AED, etc., which is a list of the symbols as they appear on the webpage and in the browser-generated source code.
Looking at the source code via Chrome's browser you can see the stock symbols, but using Java, even though I get some of the source code, no matter what I try the stock symbols and plenty of other code are never generated.
I have tried implementations using the URL class, the URLConnection class, and HtmlUnit. I don't know much, but I'm guessing this part of the source is generated by some sort of JavaScript? I figured working with HtmlUnit would help, as supposedly it can handle scripts? It didn't, at least the way I am using it. Anyway, this is what I tried:
private static String name1 = "http://www.quantumonline.com/pfdtable.cfm?Type=TaxAdvPfds&SortColumn=Company&SortOrder=ASC";
// Implementation 1
public static void main(String[] args) throws IOException {
    URL thisUrl = new URL(name1);
    BufferedReader thisUrlBufferedReader = new BufferedReader(new InputStreamReader(thisUrl.openStream()));
    String currentline;
    while ((currentline = thisUrlBufferedReader.readLine()) != null) {
        if (currentline.contains("href")) {
            System.out.println(currentline);
        }
    }
}
// Implementation 2. My understanding of fudging with addRequestProperty on a URLConnection was to make
// sure that the website wasn't restricting me based on my user agent. I honestly don't really know what
// it does, but I tried with and without it; it didn't help.
public static void main(String[] args) throws IOException {
    URL thisUrl = new URL(name1);
    URLConnection thisUrlConnect = thisUrl.openConnection();
    thisUrlConnect.addRequestProperty("User-Agent", "the user agent i got from http://whatsmyuseragent.com/");
    InputStream input = thisUrlConnect.getInputStream();
    BufferedReader thisUrlBufferedReader = new BufferedReader(new InputStreamReader(input));
    String currentline;
    while ((currentline = thisUrlBufferedReader.readLine()) != null) {
        System.out.println(currentline);
    }
}
// Implementation 3. I also used WebClient(BrowserVersion.CHROME) plus all the other versions;
// nothing worked.
public static void main(String[] args) throws Exception {
    WebClient webClient = new WebClient();
    HtmlPage page = webClient.getPage(name1);
    System.out.println(page.asXml());
}
}
Anyway, if anyone has any ideas I'm all ears. THANKS!!!
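For reference, a variant of implementation 3 that explicitly enables JavaScript and waits for background scripts before reading the page (a rough, untested sketch; the 10 second wait is an arbitrary guess):

public static void main(String[] args) throws Exception {
    WebClient webClient = new WebClient(BrowserVersion.CHROME);
    webClient.getOptions().setJavaScriptEnabled(true);
    webClient.getOptions().setCssEnabled(false);
    webClient.getOptions().setThrowExceptionOnScriptError(false); // the site's scripts may throw
    webClient.setAjaxController(new NicelyResynchronizingAjaxController()); // resynchronize AJAX calls
    HtmlPage page = webClient.getPage(name1);
    webClient.waitForBackgroundJavaScript(10000); // give scripts time to fill in the table
    System.out.println(page.asXml());
}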