Can I configure and create/update an existing project in Hudson without using its user interface?
Is it possible by changing a configuration file or by some other means?
The Remote Access API page mentions that you can create/copy jobs with it.
Remote access API is offered in a REST-like style.
That is, there is no single entry point for all features; instead, they are available under the ".../api/" URL, where the "..." portion is the data the API acts on.
For example, if your Hudson installation sits at http://deadlock.netbeans.org/hudson/, then http://deadlock.netbeans.org/hudson/api/ will give you an HTML list of all the available functionality that acts on the Hudson root.
On my Hudson, the /api address gives:
Create Job
To create a new job, post config.xml to this URL with query parameter name=JOBNAME.
You'll get 200 status code if the creation is successful, or 4xx/5xx code if it fails.
config.xml is the format Hudson uses to store the project in the file system, so you can see examples of them in /server/path/to/your/hudson/home.
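Given that, creating a job from code is a single authenticated POST. Below is a minimal sketch in Java using only the standard library; the `createItem` path is my assumption based on Hudson's `/api` help page, so verify it against your own installation's `/api` output, and substitute your real base URL and job name:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CreateJob {
    // Builds the create-job URL: POST config.xml with query parameter name=JOBNAME.
    // The "createItem" path segment is an assumption; check your /api page.
    static String createJobUrl(String hudsonRoot, String jobName) {
        return hudsonRoot + "/createItem?name="
                + URLEncoder.encode(jobName, StandardCharsets.UTF_8);
    }

    // POSTs the config.xml bytes and returns the HTTP status code
    // (200 on success, 4xx/5xx on failure, as described above).
    static int createJob(String hudsonRoot, String jobName, byte[] configXml) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(createJobUrl(hudsonRoot, jobName)).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(configXml);
        }
        return conn.getResponseCode();
    }
}
```

If your Hudson is secured, you will also need to add an HTTP Basic `Authorization` header before sending the request.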
Related
I need to invoke my Informatica PowerCenter workflow through Java, and once the workflow completes or fails, return the result to the Java program so it can continue its processing.
I tried the command line, but Java is not reading the response. I'm looking for detailed suggestions on both the Java end and the Informatica end.
It does not matter whether the Informatica job failed or succeeded; it needs to give the response back to Java.
Note that there are many ways to do this. Perhaps the easiest is to have Java touch a file somewhere that your workflow is monitoring, and then have the workflow put a result in a database or file that Java can see.
However, a more formal approach is to use web services.
See if your Informatica administrator can enable Web Service Hub and then read up on it in the PowerCenter documentation. I have provided the best information I can here from a document I wrote some years back.
There may be better ways of doing this, but Informatica hasn't changed much over the years, and WSH was the way to go some years back if you wanted to launch a workflow remotely from Java or any other language and monitor its progress.
Once the admin has enabled WSH, you then can navigate to a console (likely at /wsh) that shows various actions that can be performed and various objects.
Here's a quick-and-dirty go at running a workflow using web services from the WSH console:
Open Web Services Hub and navigate to Batch WebService and Integration WebService
Click the Try-It button (the WSDL for the Integration web service is available here as well)
Select the login operation on the left side
Fill in the Domain, Repository, Username, and Password and click Send
Obtain the Session ID in the SOAP response
Select the startWorkflow operation on the left
Provide the SessionId value obtained from the login
Provide FolderName, WorkflowName, RequestMode, DomainName, ServiceName
Click Send
At this point you should receive a successful response.
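For orientation, the login request body you send from the Try-It page looks roughly like the sketch below. This is only an illustration: the exact element names and namespaces vary by PowerCenter version, so treat the WSDL linked from the Try-It page as the authority, and replace the placeholder values with your own.

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <!-- Element names/namespaces come from the Integration WebService WSDL -->
    <login>
      <DomainName>MY_DOMAIN</DomainName>
      <RepositoryName>MY_REPO</RepositoryName>
      <UserName>admin</UserName>
      <Password>secret</Password>
    </login>
  </soapenv:Body>
</soapenv:Envelope>
```

The SessionId comes back in the SOAP response body and is then carried as a header on every subsequent call.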
The web service does not wait until the workflow completes.
Once you can use the web services to control workflows, you can use a Java web service framework like Axis to generate client classes for the generic batch-processing service.
This approach is covered well in the Informatica documentation.
Unfortunately it is a somewhat cumbersome process, but it works. Web services can be invoked as follows from Java:
Create a new Data Integration Service Locator and use that to obtain the Data Integration Interface. This is used to execute all Informatica WS calls. This is based on the service location embedded in the WSDL.
Log in to Informatica and obtain a session ID for the connection
Create a session header, holding the session ID, and place in the Data Integration Interface.
Create a Service Info object that identifies the Informatica Domain Name and Service Name.
Create an object to hold workflow, folder, run mode, and Service Info object
Launch workflow using startWorkflowEx in order to return the run ID
Build a WorkflowRequest object with all of the same workflow information in order to wait for completion
Call waitTillWorkflowComplete in order to block until the Informatica workflow completes
Other features are available, so you should be able to check return codes and such.
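The steps above can be sketched roughly as follows. This is pseudocode: the actual class and method names come from the client stubs Axis generates from your WSH WSDL, and will differ between PowerCenter versions.

```
// All names below are placeholders for Axis-generated stubs.
locator  = new DataIntegrationServiceLocator()
diServer = locator.getDataIntegration()          // from the service location in the WSDL

sessionId = diServer.login(domain, repository, user, password)
header    = new SessionHeader(sessionId)         // session header carried on every call
setHeader(diServer, header)

serviceInfo = new ServiceInfo(domainName, integrationServiceName)
startReq    = new WorkflowRequest(folder, workflowName, requestMode, serviceInfo)

runId   = diServer.startWorkflowEx(startReq)     // returns the run ID
waitReq = new WorkflowRequest(folder, workflowName, requestMode, serviceInfo)
waitReq.setRunId(runId)
diServer.waitTillWorkflowComplete(waitReq)       // blocks until the workflow ends
```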
You should try something like this.
String cmd = "C:\\Informatica\\9.6.1\\clients\\PowerCenterClient\\CommandLineUtilities\\PC\\server\\bin\\pmcmd.exe";
final Process cmdProcess = Runtime.getRuntime().exec(new String[]{cmd});
OutputStream out = cmdProcess.getOutputStream();
// pmcmd reads commands from stdin; terminate each command with a newline
out.write("connect -sv IS_NAME -d DOMAIN_NAME -u USER -p PWD\n".getBytes());
out.flush();
out.close();
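Since the original complaint was that Java is not reading the response, a sturdier variant is to use ProcessBuilder, merge stderr into stdout, and read all output before waiting for the exit code. Passing the whole command as arguments (e.g. `pmcmd startworkflow -sv IS_NAME -d DOMAIN_NAME -u USER -p PWD -f FOLDER -wait WF_NAME`; check the pmcmd reference for the exact flags in your version) avoids the stdin handling entirely. The helper below is generic:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;

public class RunCommand {
    // Runs a command, captures combined stdout/stderr, and returns the output.
    // pmcmd signals success/failure via its exit code, printed here for the caller.
    static String run(List<String> command) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true); // merge stderr into stdout so nothing is lost
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        int exit = p.waitFor();
        System.out.println("exit=" + exit);
        return out.toString();
    }
}
```

With `-wait`, pmcmd blocks until the workflow finishes, so the exit code tells your Java program whether it succeeded or failed, regardless of the outcome.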
Our Java web application uses Alfresco as its DMS. The application uses a single system user to connect to Alfresco and manages the access rights itself with some business logic.
Now what I'd like to accomplish is to use the MS Office URIs to do online editing of Word documents that live in Alfresco. That is, for example, a URL that looks like ms-word:ofe|u|https://ourwebapp.com/documents/mydocument.docx
However if we open our documents like this, the user would end up being able to do stuff on Alfresco that we don't want them to do.
Because we want to keep our documents safe and secure, we don't want the users to be able to get the Alfresco documents "directly", but only through our app. Opening Alfresco documents directly would mean that each individual user would need a unique Alfresco username/password, which we don't have and don't want, because we already have lots of documents living in Alfresco.
Surely there are other companies running into this problem? I.e. using their DMS with one single system user?
What I've already tried is to make a REST endpoint. A Spring filter ensures that an authorisation header with username/password is added, and the request is forwarded to Alfresco. Then the response from Alfresco is passed back to the user. However, this results in a document that's opened in read-only mode at best. Furthermore, it doesn't seem very secure to set up a connection for the user using the system user's credentials. For all I know, the user would be able to do things in Alfresco he isn't supposed to do, like editing or even viewing other documents.
There's very little documentation on how the ms-word protocol exactly works, maybe you can point me in the right direction? Or suggest some workarounds I might try out?
For this to work using the SharePoint protocol (SPP), you would have to reimplement the whole protocol server in front of your application, since you control the access. There is no free or otherwise available SPP implementation I know of that you can (re)use for this.
The Alfresco protocol server may not be an option, since you can't (and don't want to) mirror access control from your app into Alfresco. If you access a system like Alfresco or SharePoint using a file protocol, you will get too many access rights, as you already described. By following the concept of a single application user, you may be locked out of Alfresco's end-user concepts if you can't mirror the access logic into Alfresco.
Years ago we implemented a dynamic low-level access voter to upgrade or downgrade access inside Alfresco's node service, allowing specific permissions based on types and metadata. In the same way, someone could implement an interface to another system to delegate permission checks based on external data, but this would slow down all the systems involved dramatically.
We have a similar requirement, since we access documents and data from several enterprise sources (including Alfresco) from our own business process product, which has a rule- and process-based access concept built on the cases and processes the documents are involved in, not on folders or documents' static ACLs. We use a local service installed on the client, partnering with the browser app, for downloading, opening, and saving back documents from a local temporary (checked-out) path after the file is closed. Our local client knows nothing about Alfresco and is authenticated only against our services using JSON Web Tokens.
So my answer is more a concept than a ready-to-go solution, in the hope that it is helpful.
I'm trying to get every project that a given user is authorized to "see". For example, when I log into my GitLab account I'm able to choose (from the dashboard) which project I want to explore.
So, what I need is to get that "list" dynamically with JGit in order to show it to the user that's logged into my webapp.
Is it possible to do it using JGit? Or should I use the GIT-API?
The GIT-API you mention (https://developer.github.com/v3/) is for GitHub, not GitLab.
For GitLab, you should use https://docs.gitlab.com/ce/api/.
Listing the projects accessible by a user was requested in gitlab-ce issue 33657 and implemented with gitlab-ce merge_request 12596 in commit 050eae8:
GET /users/:user_id/projects
You don't need JGit, but a GitLab API wrapper in Java, like gmessner/gitlab4j-api.
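If you'd rather not pull in a wrapper library, the call is a plain authenticated GET against GitLab's v4 REST API. A minimal sketch with only the standard library; the base URL, user id, and token are placeholders:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GitLabProjects {
    // Builds the endpoint from the merge request above: GET /users/:user_id/projects
    static String userProjectsUrl(String gitlabBase, long userId) {
        return gitlabBase + "/api/v4/users/" + userId + "/projects";
    }

    // Fetches the JSON array of projects visible for that user.
    // The private token must belong to a user allowed to see them.
    static String fetch(String url, String privateToken) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("PRIVATE-TOKEN", privateToken);
        StringBuilder body = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) body.append(line);
        }
        return body.toString();
    }
}
```

You would then parse the returned JSON (e.g. with Jackson or Gson) to build the project list shown to the logged-in user.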
I have developed an application that lives in a public Github repository.
The app interacts with systems that require credentials that are currently stored in a properties file.
A Jenkins box runs the app periodically.
The problem of saving the project in github without exposing my credentials is succinctly addressed here.
How do I pass my credentials to the jenkins job without exposing the credentials (needed by the app) to my workmates?
In my case I went for a Jenkins parameterized build, which allowed me to provide string- and password-type parameters.
The params are read in the program via System.getProperty.
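As an illustrative sketch (the property and environment-variable names here are made up), the Jenkins job can pass the password parameter to the JVM, e.g. `java -Dapp.password=$PASSWORD -jar app.jar`, and the app reads it without anything ever being committed to the repository:

```java
public class Credentials {
    // Reads a credential from a JVM system property, falling back to an
    // environment variable, so nothing secret is hard-coded in the repo.
    static String credential(String propertyName, String envName) {
        String value = System.getProperty(propertyName);
        if (value == null || value.isEmpty()) {
            value = System.getenv(envName);
        }
        if (value == null) {
            throw new IllegalStateException("Missing credential: " + propertyName);
        }
        return value;
    }
}
```

With a password-type parameter, Jenkins also masks the value in the console log.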
We are using Java + RESTful (Jersey) web services and return objects (XML files in an SVN repo) by opening a stream to a URL directly.
However, now I need to return a specific revision of the object (a specific revision of the XML file from the SVN repository). I am able to use the SVN APIs and check that file out locally on the server. Unfortunately, I have no idea how to stream it from a URL directly and transport it back to the client.
Could anyone guide me on this?
To support requesting arbitrary revisions, mod_dav_svn supports a query argument. For example, to request revision 1430000 of the CHANGES file on Subversion's trunk you'd use:
https://svn.apache.org/repos/asf/subversion/trunk/CHANGES?p=1430000
The p stands for peg revision, which means you can even specify paths that have been deleted. For instance, this URL works:
https://dist.apache.org/repos/dist/dev/subversion/subversion-1.7.14.tar.gz.asc?p=3664
Even though trying the same URL without the peg revision argument doesn't work because the path has been deleted in the HEAD revision:
https://dist.apache.org/repos/dist/dev/subversion/subversion-1.7.14.tar.gz.asc
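To wire the peg-revision URL into the Jersey service from the question, the server can simply proxy the repository's response instead of checking the file out locally. A minimal sketch with plain java.net streams; in the actual resource you would wrap `copy` in a JAX-RS `StreamingOutput`, and the URL and revision below are examples:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

public class SvnRevisionStream {
    // Appends the mod_dav_svn peg-revision query parameter described above.
    static String pegUrl(String fileUrl, long revision) {
        return fileUrl + "?p=" + revision;
    }

    // Copies the repository's response straight through to the client stream,
    // so no local checkout is needed on the server.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    // Example usage inside a Jersey StreamingOutput.write(OutputStream):
    //   try (InputStream in = new URL(pegUrl(fileUrl, rev)).openStream()) {
    //       copy(in, clientOut);
    //   }
}
```

If the repository requires authentication, open the connection via HttpURLConnection and add the appropriate Authorization header before streaming.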