I want to access the files of all branches in a remote repo from Java, to analyze the committed code without cloning the repository locally. How can I achieve this, and what is the procedure, if there is any way to do it?
Thanks in advance.
Try scm4j-vcs-git:
public static final String WORKSPACE_DIR = System.getProperty("java.io.tmpdir") + "git-workspaces";
...
IVCSWorkspace workspace = new VCSWorkspace(WORKSPACE_DIR);
String repoUrl = "https://github.com/MyUser/MyRepo";
IVCSRepositoryWorkspace repoWorkspace = workspace.getVCSRepositoryWorkspace(repoUrl);
IVCS vcs = new GitVCS(repoWorkspace);
vcs.setCredentials("user", "password"); // if necessary
vcs.getBranchesList();
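getBranchesList() returns the branch names. If you would rather not add a wrapper library, a rough equivalent with plain JGit is sketched below: it fetches every branch into an in-memory repository, so nothing is written to disk as a working copy (the objects are still downloaded, just held in RAM). It assumes anonymous read access to the sample URL; add a CredentialsProvider to the fetch for private repos. Treat it as a sketch, not a drop-in solution.
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.internal.storage.dfs.DfsRepositoryDescription;
import org.eclipse.jgit.internal.storage.dfs.InMemoryRepository;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.Ref;
import org.eclipse.jgit.revwalk.RevCommit;
import org.eclipse.jgit.revwalk.RevWalk;
import org.eclipse.jgit.transport.RefSpec;
import org.eclipse.jgit.treewalk.TreeWalk;

public class RemoteBranchReader {
    public static void main(String[] args) throws Exception {
        String repoUrl = "https://github.com/MyUser/MyRepo"; // assumption: publicly readable repo
        InMemoryRepository repo = new InMemoryRepository(new DfsRepositoryDescription("analysis"));
        try (Git git = new Git(repo)) {
            // Fetch every branch head into the in-memory repository (no working tree on disk)
            git.fetch()
               .setRemote(repoUrl)
               .setRefSpecs(new RefSpec("+refs/heads/*:refs/heads/*"))
               .call();
            for (Ref branch : repo.getRefDatabase().getRefsByPrefix("refs/heads/")) {
                try (RevWalk revWalk = new RevWalk(repo)) {
                    RevCommit tip = revWalk.parseCommit(branch.getObjectId());
                    try (TreeWalk treeWalk = new TreeWalk(repo)) {
                        // Walk every file in the branch tip and read its content
                        treeWalk.addTree(tip.getTree());
                        treeWalk.setRecursive(true);
                        while (treeWalk.next()) {
                            ObjectId blobId = treeWalk.getObjectId(0);
                            byte[] content = repo.open(blobId).getBytes();
                            System.out.println(branch.getName() + " : " + treeWalk.getPathString()
                                    + " (" + content.length + " bytes)");
                        }
                    }
                }
            }
        }
    }
}
Keep in mind that an in-memory repository holds all fetched objects in RAM, so this is only practical for reasonably sized repositories.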
String searchFile = "fileName.txt";
GraphServiceClient graphClient = (GraphServiceClient) GraphServiceClient.builder().authenticationProvider(authProvider).buildClient();
User me = graphClient.me().buildRequest().get();
ISiteRequestBuilder siteReq = graphClient.sites("sharepointSiteId");
IDriveRequestBuilder driveReq = siteReq.drive();
IDriveItemRequestBuilder driveRootReq = driveReq.root();
IDriveItemRequestBuilder source = driveRootReq.itemWithPath("sourcePath");
IDriveItemRequestBuilder destination = driveRootReq.itemWithPath("destinationPath");
IDriveItemCollectionRequestBuilder childrenReq = source.children();
String sourceId = source.buildRequest().get().id;
String destinationId = destination.buildRequest().get().id;
IDriveItemSearchCollectionPage searchResult = source.search(searchFile).buildRequest().get();
DriveItem fileResult = null;
for (DriveItem driveItem : searchResult.getCurrentPage()) {
    fileResult = driveItem;
}
if (fileResult != null) {
    ItemReference parentReference = new ItemReference();
    parentReference.id = destinationId;
    driveReq.items(fileResult.id).copy(searchFile, parentReference).buildRequest().post();
}
I have this Java code to search for a file in a SharePoint site location and copy it to another one.
It works OK, but it has a bug: I cannot seem to limit the search to the source path only; instead it searches the whole site for files with that name. So if there are multiple files with that name on the site, it brings them all back in the results, and that can mess up my copy.
Can anyone help me solve this?
PS: any pointers to optimise this code are also welcome.
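For what it's worth, one way to narrow the results (a sketch only; it assumes the site-wide search behaviour described above and relies on the parentReference field the Graph SDK exposes on each DriveItem) is to keep only the hits whose parent folder matches the sourceId already resolved above:
// Sketch: ignore search hits that do not sit directly under the source folder
DriveItem fileResult = null;
for (DriveItem driveItem : searchResult.getCurrentPage()) {
    if (driveItem.parentReference != null && sourceId.equals(driveItem.parentReference.id)) {
        fileResult = driveItem;
        break;
    }
}
If the search can span several pages, you would also have to follow searchResult.getNextPage() before concluding the file is absent; and since the code already builds the source folder's children() request, listing the children and matching on name is another way to avoid the site-wide search altogether.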
Recently, I've been trying to develop something using the Bigtable emulator with Java (Spring Boot) in IntelliJ IDEA.
What I have done:
The Bigtable emulator works well on my computer (macOS 10.15.6).
"cbt" works normally against the Bigtable emulator running on my Mac.
I've checked that running the Bigtable emulator doesn't need real gcloud credentials.
I wrote a unit test in IDEA like this, and it works fine.
I have added the environment variable in the settings, like this:
My unit test code:
I. Connect init:
Configuration conf;
Connection connection = null;
conf = BigtableConfiguration.configure("fake-project", "fake-instance");
String host = "localhost";
String port = "8086";
II. Constant data to be written into the table.
final byte[] TABLE_NAME = Bytes.toBytes("Hello-Bigtable");
final byte[] COLUMN_FAMILY_NAME = Bytes.toBytes("cf1");
final byte[] COLUMN_NAME = Bytes.toBytes("greeting");
final String[] GREETINGS = {
"Hello World!", "Hello Cloud Bigtable!", "Hello!!"
};
III. Connecting:
if (!Strings.isNullOrEmpty(host)) {
    conf.set(BigtableOptionsFactory.BIGTABLE_HOST_KEY, host);
    conf.set(BigtableOptionsFactory.BIGTABLE_PORT_KEY, port);
    conf.set(BigtableOptionsFactory.BIGTABLE_USE_PLAINTEXT_NEGOTIATION, "true");
}
connection = BigtableConfiguration.connect(conf);
IV. Write & Read data:
Admin admin = connection.getAdmin();
Table table = connection.getTable(TableName.valueOf(TABLE_NAME));
if (!admin.tableExists(TableName.valueOf(TABLE_NAME))) {
    HTableDescriptor descriptor = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
    descriptor.addFamily(new HColumnDescriptor(COLUMN_FAMILY_NAME));
    System.out.print("Create table " + descriptor.getNameAsString());
    admin.createTable(descriptor);
}
for (int i = 0; i < GREETINGS.length; i++) {
    String rowKey = "greeting" + i;
    Put put = new Put(Bytes.toBytes(rowKey));
    put.addColumn(COLUMN_FAMILY_NAME, COLUMN_NAME, Bytes.toBytes(GREETINGS[i]));
    table.put(put);
}
Scan scan = new Scan();
ResultScanner scanner = table.getScanner(scan);
for (Result row : scanner) {
    byte[] valueBytes = row.getValue(COLUMN_FAMILY_NAME, COLUMN_NAME);
    System.out.println('\t' + Bytes.toString(valueBytes));
}
V. Output
Hello World!
Hello Cloud Bigtable!
Hello!!
The problem came after I moved this code into my project.
When I run the code in debug mode, I get something like this when it tries to connect to Bigtable:
It seems that it can't create a new instance based on the config I create.
Eventually, it shows me an error like:
Could not find an appropriate constructor for com.google.cloud.bigtable.hbase1_x.BigtableConnection
P.S. I have tried launching IntelliJ IDEA from the command line. The reason I did so is that the environment variable was missing when I ran the unit test.
In my .zshrc:
My CMD tool is iTerm2 with oh-my-zsh.
Any help is appreciated!
Thanks a lot.
It seems that you are missing the constructor for BigtableConnection: BigtableConnection(org.apache.hadoop.conf.Configuration conf)
I would suggest trying to create a Connection object by following the steps mentioned in the Google documentation:
private static Connection connection = null;

public static void connect() throws IOException {
    Configuration config = BigtableConfiguration.configure(PROJECT_ID, INSTANCE_ID);
    // Include the following line if you are using app profiles.
    // If you do not include the following line, the connection uses the
    // default app profile.
    config.set(BigtableOptionsFactory.APP_PROFILE_ID_KEY, APP_PROFILE_ID);
    connection = BigtableConfiguration.connect(config);
}
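A minimal usage sketch might then look like the snippet below. PROJECT_ID, INSTANCE_ID and APP_PROFILE_ID are placeholders you define yourself (drop the app-profile line if you don't use profiles), the table/family/qualifier names are the ones from your test, and when targeting the emulator you would still apply the BIGTABLE_HOST_KEY / BIGTABLE_PORT_KEY / BIGTABLE_USE_PLAINTEXT_NEGOTIATION overrides from section III of your question before connecting:
// Hypothetical usage of the connect() helper above, reusing the HBase classes
// already imported in the question's test code.
connect();
try (Table table = connection.getTable(TableName.valueOf("Hello-Bigtable"))) {
    // Read back one of the rows written by the test
    Result row = table.get(new Get(Bytes.toBytes("greeting0")));
    System.out.println(Bytes.toString(row.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"))));
} finally {
    connection.close();
}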
I am planning to integrate TFS with another application using a web service.
I am new to TFS, so I downloaded the TFS Java SDK 2010. I have been writing a sample program to check a file in to TFS, but without success. There are also not many helpful posts on the internet covering the Java SDK samples.
Below is the code I have written:
public static void main(String[] args) {
// TODO Auto-generated method stub
TFSTeamProjectCollection tpc = SnippetSettings.connectToTFS(); //got the connection to TFS
VersionControlClient vcc = tpc.getVersionControlClient();
//WorkspaceInfo wi = Workstation.Current.GetLocalWorkspaceInfo(Environment.CurrentDirectory);
//vcc.get
String[] paths =new String[1];
paths[0]="D:\\Tools\testfile.txt"; //wants to checkin this local file
Workspace ws = vcc.createWorkspace(null,"Testworkspacename3", null, "","Testcomment",null, null); // this is workspace created at path local C:\ProgramData\Microsoft Team Foundation Local Workspaces
int item = ws.pendAdd(paths, true, null, LockLevel.NONE, GetOptions.GET_ALL, PendChangesOptions.GET_LATEST_ON_CHECKOUT); // this line gives me 0 count. so this is problematic . 0 means nothing is being added.
PendingSet pd = ws.getPendingChanges();
PendingChange[] pendingChanges = pd.getPendingChanges();
ws.checkIn(pendingChanges, "samashti comment");
Project project = tpc.getWorkItemClient().getProjects().get(SnippetSettings.PROJECT_NAME);
System.out.println();
}
Please help: what is wrong here? Can someone provide me with a correct working sample for checking in a new file and an existing file using Java?
Just follow the steps below (a rough end-to-end sketch of the last three steps follows the workspace snippet at the end of this answer):
Connect to team project collection
Get version control client
Create a new workspace
Add file to workspace
Get pending changes
Check in pending changes
Below are some links about the TFS SDK for Java for your reference:
https://github.com/gocd/gocd/blob/master/tfs-impl/src/com/thoughtworks/go/tfssdk/TfsSDKCommand.java
https://github.com/jenkinsci/tfs-plugin/blob/master/src/main/java/hudson/plugins/tfs/commands/NewWorkspaceCommand.java
Please see this code snippet for creating and mapping a workspace, as per TFS-SDK-14.0.3:
public static Workspace createAndMapWorkspace(final TFSTeamProjectCollection tpc) {
    final String workspaceName = "SampleVCWorkspace" + System.currentTimeMillis(); //$NON-NLS-1$
    Workspace workspace = null;
    // Get the workspace
    workspace = tpc.getVersionControlClient().tryGetWorkspace(ConsoleSettings.MAPPING_LOCAL_PATH);
    // Create and map the workspace if it does not exist
    if (workspace == null) {
        workspace = tpc.getVersionControlClient().createWorkspace(
            null,
            workspaceName,
            "Sample workspace comment", //$NON-NLS-1$
            WorkspaceLocation.SERVER,
            null,
            WorkspacePermissionProfile.getPrivateProfile());
        // Map the workspace
        final WorkingFolder workingFolder = new WorkingFolder(
            ConsoleSettings.MAPPING_SERVER_PATH,
            LocalPath.canonicalize(ConsoleSettings.MAPPING_LOCAL_PATH));
        workspace.createWorkingFolder(workingFolder);
    }
    System.out.println("Workspace '" + workspaceName + "' now exists and is mapped"); //$NON-NLS-1$ //$NON-NLS-2$
    return workspace;
}
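Once the workspace exists and is mapped, the remaining steps from the list above (add the file, get the pending changes, check in) look roughly like the sketch below. This is only a sketch: it reuses the pendAdd/getPendingChanges/checkIn signatures from the question's own snippet, and the file path is a hypothetical one that must live under the mapped local folder.
// Sketch: pend an add for a local file under the mapped folder, then check it in
String[] paths = new String[] { "D:\\Tools\\testfile.txt" }; // hypothetical file inside the mapping
int pended = workspace.pendAdd(paths, true, null, LockLevel.NONE,
        GetOptions.GET_ALL, PendChangesOptions.GET_LATEST_ON_CHECKOUT);
System.out.println("Pended " + pended + " change(s)");
PendingSet pendingSet = workspace.getPendingChanges();
if (pendingSet != null && pendingSet.getPendingChanges() != null) {
    workspace.checkIn(pendingSet.getPendingChanges(), "Sample check-in comment");
}
One detail worth checking in the original snippet: the literal "D:\\Tools\testfile.txt" contains an unescaped \t (a tab character), so the local path as written never matches a real file, which by itself would explain pendAdd returning a count of 0.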
I get the following error
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
which makes little sense.
Reports are created using the BIRT designer within Eclipse, and we are using code to convert the reports into PDF.
The code looks something like this:
final EngineConfig config = new EngineConfig();
config.setBIRTHome("./birt");
Platform.startup(config);
final IReportEngineFactory factory = (IReportEngineFactory) Platform
.createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
final HTMLRenderOption ho = new HTMLRenderOption();
ho.setImageHandler(new HTMLCompleteImageHandler());
config.setEmitterConfiguration(RenderOption.OUTPUT_FORMAT_HTML, ho);
// Create the engine.
this.engine = factory.createReportEngine(config);
final IReportRunnable report = this.engine.openReportDesign(reportName);
final IRunAndRenderTask task = this.engine.createRunAndRenderTask(report);
final RenderOption options = new HTMLRenderOption();
options.setOutputFormat(HTMLRenderOption.OUTPUT_FORMAT_PDF);
final String output = reportName.replaceFirst(".rptdesign", "." + HTMLRenderOption.OUTPUT_FORMAT_PDF);
options.setOutputFileName(output);
task.setRenderOption(options);
// Run the report.
task.run();
but it seems that during the task.run() method, the system throws the error.
This needs to be able to run standalone, without needing Eclipse, and I hoped that setting BIRT home would make it happy, but there seems to be some other connection profile that I am unaware of and probably don't need.
The full error :
07-Jan-2013 14:55:31 org.eclipse.datatools.connectivity.internal.ConnectivityPlugin log
SEVERE: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
07-Jan-2013 14:55:31 org.eclipse.birt.report.engine.api.impl.EngineTask handleFatalExceptions
SEVERE: An error happened while running the report. Cause:
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getDefaultStateLocation(ConnectivityPlugin.java:155)
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getStorageLocation(ConnectivityPlugin.java:191)
at org.eclipse.datatools.connectivity.internal.ConnectionProfileMgmt.getStorageLocation(ConnectionProfileMgmt.java:1060)
at org.eclipse.datatools.connectivity.oda.profile.internal.OdaProfileFactory.defaultProfileStoreFile(OdaProfileFactory.java:170)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.defaultProfileStoreFile(OdaProfileExplorer.java:138)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.loadProfiles(OdaProfileExplorer.java:292)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.getProfileByName(OdaProfileExplorer.java:537)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getConnectionProfileImpl(ProfilePropertyProviderImpl.java:184)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getDataSourceProperties(ProfilePropertyProviderImpl.java:64)
at org.eclipse.datatools.connectivity.oda.consumer.helper.ConnectionPropertyHandler.getEffectiveProperties(ConnectionPropertyHandler.java:123)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.getEffectiveProperties(OdaConnection.java:826)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.open(OdaConnection.java:240)
at org.eclipse.birt.data.engine.odaconsumer.ConnectionManager.openConnection(ConnectionManager.java:165)
at org.eclipse.birt.data.engine.executor.DataSource.newConnection(DataSource.java:224)
at org.eclipse.birt.data.engine.executor.DataSource.open(DataSource.java:212)
at org.eclipse.birt.data.engine.impl.DataSourceRuntime.openOdiDataSource(DataSourceRuntime.java:217)
at org.eclipse.birt.data.engine.impl.QueryExecutor.openDataSource(QueryExecutor.java:407)
at org.eclipse.birt.data.engine.impl.QueryExecutor.prepareExecution(QueryExecutor.java:317)
at org.eclipse.birt.data.engine.impl.PreparedQuery.doPrepare(PreparedQuery.java:455)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.produceQueryResults(PreparedDataSourceQuery.java:190)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.execute(PreparedDataSourceQuery.java:178)
at org.eclipse.birt.data.engine.impl.PreparedOdaDSQuery.execute(PreparedOdaDSQuery.java:145)
at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.execute(DataRequestSessionImpl.java:624)
at org.eclipse.birt.report.engine.data.dte.DteDataEngine.doExecuteQuery(DteDataEngine.java:152)
at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.execute(AbstractDataEngine.java:267)
at org.eclipse.birt.report.engine.executor.ExecutionContext.executeQuery(ExecutionContext.java:1939)
at org.eclipse.birt.report.engine.executor.QueryItemExecutor.executeQuery(QueryItemExecutor.java:80)
at org.eclipse.birt.report.engine.executor.TableItemExecutor.execute(TableItemExecutor.java:62)
at org.eclipse.birt.report.engine.internal.executor.dup.SuppressDuplicateItemExecutor.execute(SuppressDuplicateItemExecutor.java:43)
at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportItemExecutor.execute(WrappedReportItemExecutor.java:46)
at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:34)
at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:65)
at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92)
at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:180)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77)
Has anyone seen this error, and can you point me in the right direction?
When I had this issue I tried two things. The first thing fixed this error, but then I just ran into the next one.
The first thing I tried was adding the following line to the setenv.sh file:
export CATALINA_OPTS="$CATALINA_OPTS -Djava.io.tmpdir=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir -Dorg.eclipse.datatools_workspacepath=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir/workspace_dtp"
This worked after I created the tmpdir and workspace_dtp directories on my local Tomcat server. This was done in response to the guidance here.
However, I then hit the next error, which was a connection profile error. I can look into it again if you need; I know how to replicate the issue.
The second thing I tried ended up solving the issue completely, and it had to do with our report designer selecting the wrong type of data source in the report design process. See my post on the Eclipse BIRT forums here for the full story: post.
Basically, the report's data source type was set to "JDBC Database Connection for Query Builder" when it should have been set to "JDBC Data Source".
Here is a tip that saved me from that pain:
just launch Eclipse with the "-clean" option after installing the BIRT plugins.
To be clear, my project was built from BIRT Maven dependencies, and so should not use Eclipse dependencies to run (except for designing reports), but... I think there was a conflict somewhere, especially with org.eclipse.datatools.connectivity_1.2.4.v201202041105.jar.
For a global understanding, you should follow the migration guide:
http://wiki.eclipse.org/Birt_3.7_Migration_Guide#Connection_Profiles
It explains how to use a connection profile to externalize data source parameters; this is not required if you define the JDBC parameters directly in the report design.
I used this programmatic way to initialize the workspace directory:
@Override
public void initializeEngine() throws BirtException {
    // define eclipse datatools workspace path (required)
    String workspacePath = setDataToolsWorkspacePath();
    // set configuration
    final EngineConfig config = new EngineConfig();
    config.setLogConfig(workspacePath, Level.WARNING);
    // config.setResourcePath(getSqlDriverClassJarPath());
    // startup OSGi framework
    Platform.startup(config); // really needed ?
    IReportEngineFactory factory = (IReportEngineFactory) Platform
            .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
    engine = factory.createReportEngine(config);
    engine.changeLogLevel(Level.WARNING);
}

private String setDataToolsWorkspacePath() {
    // DATATOOLS_WORKSPACE_PATH holds the "org.eclipse.datatools_workspacepath" property name
    // (the same property set via -D in the Tomcat answer above)
    String workspacePath = System.getProperty(DATATOOLS_WORKSPACE_PATH);
    if (workspacePath == null) {
        workspacePath = FilenameUtils.concat(SystemUtils.getJavaIoTmpDir().getAbsolutePath(), "workspace_dtp");
        File workspaceDir = new File(workspacePath);
        if (!workspaceDir.exists()) {
            workspaceDir.mkdir();
        }
        if (!workspaceDir.canWrite()) {
            workspaceDir.setWritable(true);
        }
        System.setProperty(DATATOOLS_WORKSPACE_PATH, workspacePath);
    }
    return workspacePath;
}
I also needed to force the data source parameters at runtime, this way:
private void generateReportOutput(InputStream reportDesignInStream, File outputFile, OUTPUT_FORMAT outputFormat,
Map<PARAM, Object> params) throws EngineException, SemanticException {
// Open a report design
IReportRunnable design = engine.openReportDesign(reportDesignInStream);
// Use data-source properties from persistence.xml
forceDataSource(design);
// Create RunAndRender task
IRunAndRenderTask runTask = engine.createRunAndRenderTask(design);
// Use data-source from JPA persistence context
// forceDataSourceConnection(runTask);
// Define report parameters
defineReportParameters(runTask, params);
// Set render options
runTask.setRenderOption(getRenderOptions(outputFile, outputFormat, params));
// Execute task
runTask.run();
}
private void forceDataSource(IReportRunnable runableReport) throws SemanticException {
DesignElementHandle designHandle = runableReport.getDesignHandle();
Map<String, String> persistenceProperties = PersistenceUtils.getPersistenceProperties();
String dsURL = persistenceProperties.get(AvailableSettings.JDBC_URL);
String dsDatabase = StringUtils.substringAfterLast(dsURL, "/");
String dsUser = persistenceProperties.get(AvailableSettings.JDBC_USER);
String dsPass = persistenceProperties.get(AvailableSettings.JDBC_PASSWORD);
String dsDriver = persistenceProperties.get(AvailableSettings.JDBC_DRIVER);
SlotHandle dataSources = ((ReportDesignHandle) designHandle).getDataSources();
int count = dataSources.getCount();
for (int i = 0; i < count; i++) {
DesignElementHandle dsHandle = dataSources.get(i);
if (dsHandle != null && dsHandle instanceof OdaDataSourceHandle) {
// replace connection properties from persistence.xml
dsHandle.setProperty("databaseName", dsDatabase);
dsHandle.setProperty("username", dsUser);
dsHandle.setProperty("password", dsPass);
dsHandle.setProperty("URL", dsURL);
dsHandle.setProperty("driverClass", dsDriver);
dsHandle.setProperty("jarList", getSqlDriverClassJarPath());
// @SuppressWarnings("unchecked")
// List<ExtendedProperty> privateProperties = (List<ExtendedProperty>) dsHandle
// .getProperty("privateDriverProperties");
// for (ExtendedProperty extProp : privateProperties) {
// if ("odaUser".equals(extProp.getName())) {
// extProp.setValue(dsUser);
// }
// }
}
}
}
I was having the same issue.
Changing the Data Source type from "JDBC Database Connection for Query Builder" to "JDBC Data Source" solved the problem for me.
I can't see on the wiki where checking out is documented. Ideally, I would like to check out a file "example/folder/file.xml", if not just the folder... and then when the application closes down or otherwise, be able to commit back in changes to this file. How do I do this?
As an SVNKit developer, I would recommend that you prefer the new API based on SvnOperationFactory. The old API (based on SVNClientManager) will still be operational, but all new SVN features will come only to the new API.
final SvnOperationFactory svnOperationFactory = new SvnOperationFactory();
try {
    final SvnCheckout checkout = svnOperationFactory.createCheckout();
    checkout.setSingleTarget(SvnTarget.fromFile(workingCopyDirectory));
    checkout.setSource(SvnTarget.fromURL(url));
    //... other options
    checkout.run();
} finally {
    svnOperationFactory.dispose();
}
You cannot check out a single file in Subversion; you have to check out a folder.
To check out a folder with one or more files:
SVNClientManager ourClientManager = SVNClientManager.newInstance(null,
repository.getAuthenticationManager());
SVNUpdateClient updateClient = ourClientManager.getUpdateClient();
updateClient.setIgnoreExternals(false);
updateClient.doCheckout(url, destPath, revision, revision,
isRecursive);
To commit a previously checked out folder:
SVNClientManager ourClientManager = SVNClientManager.newInstance(null,
repository.getAuthenticationManager());
ourClientManager.getWCClient().doInfo(wcPath, SVNRevision.HEAD);
ourClientManager.getCommitClient().doCommit(
        new File[] { wcPath }, keepLocks, commitMessage, false, true);
I also used the code snippet proposed by Dmitry Pavlenko, and I had no problems.
But it took nearly 30 minutes to check out or update a repo structure of 35 MB.
That's not usable in my use case (simply checking out a directory structure as part of the content/documents/media of a web application).
Or have I made some errors?
final ISVNAuthenticationManager authManager = SVNWCUtil.createDefaultAuthenticationManager(name, password);
final SVNURL svnUrl = SVNURL.create(url.getProtocol(), name, url.getHost(), 443, url.getPath(), true);
SVNRepository svnRepo= SVNRepositoryFactory.create(svnUrl);
svnRepo.setAuthenticationManager(authManager);
svnOperationFactory.setAuthenticationManager(authManager);
SVNDirEntry entry = svnRepo.info(".", -1);
long remoteRevision = entry.getRevision();
if (!workingCopyDirectory.exists()) {
    workingCopyDirectory.mkdirs();
}
final SvnCheckout checkout = svnOperationFactory.createCheckout();
checkout.setSource(SvnTarget.fromURL(svnUrl));
checkout.setSingleTarget(SvnTarget.fromFile(workingCopyDirectory));
remoteRevision = checkout.run();