I know I can get all commits in a project using GET /repos/:owner/:repo/commits
Now I want to get all commits for a certain release of that project.
What should I do?
Judging by your answer to my question, you want the commits made since some tag. This will take a couple of steps. First, you need to get the SHA for the tag in question; use the git references API to get a specific reference. In the specific example you linked, you'll want to do
GET /repos/nasa/mct/git/refs/tags/v1.8b3
From the response, take the 'sha' attribute of the object stored in its 'object' attribute. With that SHA, use the commits API to list commits starting from it, so your request will look like this:
GET /repos/nasa/mct/commits?sha=%(sha_from_first_request)s
That will give you 30 commits per page by default (if I remember correctly), so you should see if adding &per_page=100 to the end helps. I can't tell you exactly how to do this in Java, but I expect you'll be able to use one of the libraries written to interact with the API to make it easier.
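That said, the raw requests themselves are straightforward; a rough sketch with Java 11's built-in HttpClient (no GitHub library, and the regex-based extraction of object.sha is just a stand-in for a real JSON parser) might look like this:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagCommits {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: resolve the tag to a SHA via the git references API.
        String refUrl = "https://api.github.com/repos/nasa/mct/git/refs/tags/v1.8b3";
        String refJson = client.send(
                HttpRequest.newBuilder(URI.create(refUrl)).build(),
                HttpResponse.BodyHandlers.ofString()).body();

        // Crude extraction of object.sha; a real JSON library is preferable.
        Matcher m = Pattern.compile("\"sha\"\\s*:\\s*\"([0-9a-f]{40})\"").matcher(refJson);
        if (!m.find()) {
            throw new IllegalStateException("no SHA found in response");
        }
        String sha = m.group(1);

        // Step 2: list commits starting from that SHA, 100 per page.
        String commitsUrl = "https://api.github.com/repos/nasa/mct/commits?sha=" + sha + "&per_page=100";
        String commitsJson = client.send(
                HttpRequest.newBuilder(URI.create(commitsUrl)).build(),
                HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(commitsJson);
    }
}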
I'm working on Java code that checks whether a file exists in the system and whether it's checked out. After these checks it calls the CHECKIN_UNIVERSAL service. This is where it stops. Checking in a new file works just fine, but it's the checking in of an existing file that's giving errors.
The specific error displayed (without making modifications to my original code) is !cscheckinitemexists. A bunch of googling turned up the advice to clear the data binder, but then it fails with an error saying it cannot retrieve or use the security token.
Here's the code I use to clear and retrieve the data binder:
m_binder.clearResultSets();
m_binder.getLocalData().clear();
m_binder.setEnvironment(new IdcProperties(SharedObjects.getSecureEnvironment()));
What does the rest of your code look like? You can link to a Gist.
Generally, I have run into this due to data pollution (as you stated).
Is there a reason you are using m_binder instead of creating a brand new DataBinder?
After looking at your gist, you are using m_binder (the DataBinder from the service) to execute CHECKIN_UNIVERSAL. Don't do this. Use a separate DataBinder (as you did for the DOC_INFO_BY_NAME service call).
Either use requestBinder or a new DataBinder.
Another way to avoid this issue is to simply not look for the checkout. CHECKIN_UNIVERSAL supports a flag that checks out a content item if it's not already checked out.
Add the flag "isForceCheckout" to your binder, with a value of "1".
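For what it's worth, a rough sketch of that combination, a fresh DataBinder plus the isForceCheckout flag, might look something like this; the executeService(...) call and the variable names are hypothetical stand-ins for however you already invoke DOC_INFO_BY_NAME in your component:

import intradoc.data.DataBinder;

private void checkinExisting(String docName, String docTitle, String filePath) throws Exception {
    // Build the check-in request in a fresh binder instead of reusing m_binder,
    // so no result sets or local data from the current service leak into it.
    DataBinder checkinBinder = new DataBinder();
    checkinBinder.putLocal("IdcService", "CHECKIN_UNIVERSAL");
    checkinBinder.putLocal("dDocName", docName);
    checkinBinder.putLocal("dDocTitle", docTitle);
    checkinBinder.putLocal("primaryFile", filePath);

    // Let CHECKIN_UNIVERSAL check the item out itself if it isn't already checked out.
    checkinBinder.putLocal("isForceCheckout", "1");

    // Hypothetical helper: execute the service the same way you already
    // execute DOC_INFO_BY_NAME (requestBinder, your own helper, etc.).
    executeService(checkinBinder);
}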
I'm using the GitHub API to get commits.
GET /repos/:owner/:repo/commits
https://api.github.com/repos/nasa/mct/commits?branch=incoming&since=2014-08-26T23:43:48Z
For example, owner:nasa project:mct
I want to get all commits in the branch incoming since that time, but it seems to return only one commit (it should be 9 commits). What can I do?
I've got it. I should check the GitHub API more carefully.
You can use pagination to get what you want.
By default, it only displays the default number of items per page.
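For example, a small Java sketch that pages through the results by increasing the page parameter until an empty page comes back (note that the commits endpoint takes the branch via the sha parameter; a real client would follow the Link header instead of probing for an empty page):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AllCommits {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "https://api.github.com/repos/nasa/mct/commits"
                + "?sha=incoming&since=2014-08-26T23:43:48Z&per_page=100";

        for (int page = 1; ; page++) {
            String body = client.send(
                    HttpRequest.newBuilder(URI.create(base + "&page=" + page)).build(),
                    HttpResponse.BodyHandlers.ofString()).body();

            if (body.trim().equals("[]")) {
                break; // no more commits
            }
            System.out.println("Page " + page + ": " + body);
        }
    }
}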
I'm new to Java and working on a simple application that monitors a URL and notifies me when a table is updated with new items. Looking at the entire page will not work, as there are ads that change all the time and would give false positives.
My thought was to fetch the URL line by line, looking for the elements. For each element I will check whether it is already in an ArrayList. If not, the element is added to the ArrayList and a notification is sent.
What I need help with is not the exact code, but advice on whether this would be a good approach, and whether I should store the elements in an ArrayList or in a file instead, as there are 2 lines of text in each element.
It would also be good to get recommendations on which methods and libraries would be worth looking at.
Thanks in advance
Sebastian
To check the site, it'd probably be more stable to parse the HTML and work with an object representation of the DOM. I've never had to do this myself, but in a question about how to do it another user suggested JTidy; maybe you could have a look at that.
As for storing the information (what you currently do in your ArrayList): this really depends on what you use your application for. If you only want to be notified of changes that occur during the runtime of your program this is perfectly fine. If you want to have the information persist you should find a way to store the information in the file system or database.
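As a rough illustration of that approach (using jsoup here simply as one HTML-parsing option, with a made-up URL and CSS selector for the table you care about, and only in-memory storage):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.HashSet;
import java.util.Set;

public class TableWatcher {
    // Rows we have already seen; lives only for the runtime of the program.
    private final Set<String> seenRows = new HashSet<>();

    public void poll(String url) throws Exception {
        Document doc = Jsoup.connect(url).get();
        // "#items tr" is a placeholder selector; adjust it to the actual table.
        for (Element row : doc.select("#items tr")) {
            String key = row.text();
            if (seenRows.add(key)) {
                System.out.println("New item: " + key); // send your notification here
            }
        }
    }

    public static void main(String[] args) throws Exception {
        TableWatcher watcher = new TableWatcher();
        while (true) {
            watcher.poll("http://example.com/page-to-watch"); // placeholder URL
            Thread.sleep(60_000); // check once a minute
        }
    }
}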
I am working on content parsing. I executed a sample program for this, for which I have taken a sample link.
Please visit the link below:
http://www.equitymaster.com/stockquotes/sector.asp?sector=0%2CSOFTL&utm_source=top-menu&utm_medium=website&utm_campaign=performance&utm_content=key-sector
In the above link I parsed the table data and stored it into a Java object.
BSE and NSE are not my exact requirement; I have just taken them as a sample example. The page behind the above link is built out of tables, and they do not use ids or classes. In my example I parsed the data using XPath.
This is my XPath:
/html/body/table[4]/tbody/tr/td/table[2]/tbody/tr[2]/td[2]/font/table[2]
I selected it and the parsing works fine. The problem is that if they change the website structure in the future, my program will certainly stop working. Tell me another way to parse the data dynamically, store it in a database, and display the results based on the condition even if they change the webpage structure. I used the jsoup API for this. Tell me about any other APIs that provide good support for this type of requirement.
If you're trying to parse a page without any clear id/class to select your nodes, you have to try and rely on something else. Redefining the whole tree is indeed the weakest way of doing it; if anything is added or changed, everything will collapse.
You could try relying on color: //table[@bgcolor="#c9d0e0"], the "GET MORE INFO" field: //table[tr/td//text()="GET MORE INFO"], the "More Info" that is on every line: //table[.//td//text()=" More Info "]...
The idea is to find something ideally unique (if you can't find any unique criterion, table[color condition selecting a few tables][2] is still stronger than walking the whole tree), something that is present every time, and use that as an id.
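Since jsoup was mentioned, the same idea translates directly to its CSS selectors; a quick sketch (the URL is the sector page from the question, shortened, and the selectors mirror the XPath examples above):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class SectorTable {
    public static void main(String[] args) throws Exception {
        String url = "http://www.equitymaster.com/stockquotes/sector.asp?sector=0%2CSOFTL";
        Document doc = Jsoup.connect(url).get();

        // Anchor on an attribute value instead of the full path from <html> down.
        Elements byColor = doc.select("table[bgcolor=#c9d0e0]");

        // Or anchor on text that is always present in the table you want.
        Elements byText = doc.select("table:contains(GET MORE INFO)");

        System.out.println("by color: " + byColor.size() + ", by text: " + byText.size());
    }
}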
I'm using JavaHL to connect to an SVN 1.6 repository. While I managed to list the contents of the repository, I'm not able to get the item history (the comments made on the check-ins as well as the dates and the authors).
As far as I can see, SVNClient.logMessages is the right method, but the callback method never gets executed. I used Revision.HEAD for the path revision and a revision range object holding Revision.START and Revision.HEAD; the limit is set to 0 (which means no limit according to the documentation). I'm trying to fetch the revision, the date, the author and the comment.
If someone knows of example code using JavaHL, I might be able to find my fault by comparing that code to mine.
BTW: I know about SVNKit, but management decided not to buy it. Thus I have to use JavaHL, where next to no sample programs exist (and the docs merely list the classes and interfaces without very detailed descriptions). So please don't point me in the direction of SVNKit, as that is impossible for me.
Any pointers appreciated.
Gnarf
The issue has been solved. The problem was the call to SVNClient.logMessages(), especially the revision range used.
The start revision had been Revision.START, which, according to the documentation, describes the "first existing revision".
The problem disappeared when I used Revision.getInstance(1) instead. As it is reasonable that any item has at least one revision (the initial one) with that number, it should be safe to use that.
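For anyone comparing against their own code, a sketch of a logMessages call along these lines; the exact overload and callback signature vary between JavaHL versions, and the repository URL is just a placeholder, so treat it as an outline rather than tested code:

import org.tigris.subversion.javahl.ChangePath;
import org.tigris.subversion.javahl.LogMessageCallback;
import org.tigris.subversion.javahl.Revision;
import org.tigris.subversion.javahl.RevisionRange;
import org.tigris.subversion.javahl.SVNClient;

import java.util.Map;

public class HistoryDump {
    public static void main(String[] args) throws Exception {
        SVNClient client = new SVNClient();
        String url = "http://svn.example.com/repos/trunk/some/item"; // placeholder

        // Start at revision 1 instead of Revision.START (see above).
        RevisionRange[] range = {
                new RevisionRange(Revision.getInstance(1), Revision.HEAD) };

        client.logMessages(url, Revision.HEAD, range,
                false,  // stopOnCopy
                false,  // discoverPath
                false,  // includeMergedRevisions
                new String[] { "svn:log", "svn:author", "svn:date" },
                0,      // limit: 0 = no limit
                new LogMessageCallback() {
                    public void singleMessage(ChangePath[] changedPaths, long revision,
                                              Map revprops, boolean hasChildren) {
                        System.out.println(revision + " " + revprops.get("svn:author")
                                + " " + revprops.get("svn:date") + " " + revprops.get("svn:log"));
                    }
                });
    }
}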
Hopefully this will save anyone else from spending another two-and-a-half days to figure it out!
Gnarf