How to check a file into TFS using the Java SDK - java

I am planning to integrate TFS with another application using a web service.
I am new to TFS, so I downloaded the TFS Java SDK 2010 and wrote a sample program to check a file into TFS, but it has not been successful. There are also not many helpful posts on the internet covering the Java SDK.
Below is the code I have written:
public static void main(String[] args) {
    TFSTeamProjectCollection tpc = SnippetSettings.connectToTFS(); // got the connection to TFS
    VersionControlClient vcc = tpc.getVersionControlClient();

    String[] paths = new String[1];
    paths[0] = "D:\\Tools\\testfile.txt"; // I want to check in this local file

    // the workspace is created locally under C:\ProgramData\Microsoft Team Foundation Local Workspaces
    Workspace ws = vcc.createWorkspace(null, "Testworkspacename3", null, "", "Testcomment", null, null);

    // this line gives me a count of 0, so this is the problem: 0 means nothing is being added
    int item = ws.pendAdd(paths, true, null, LockLevel.NONE, GetOptions.GET_ALL,
            PendChangesOptions.GET_LATEST_ON_CHECKOUT);

    PendingSet pd = ws.getPendingChanges();
    PendingChange[] pendingChanges = pd.getPendingChanges();
    ws.checkIn(pendingChanges, "samashti comment");

    Project project = tpc.getWorkItemClient().getProjects().get(SnippetSettings.PROJECT_NAME);
    System.out.println();
}
Please help: what is wrong here? Can someone provide a correct working sample for checking in a new file and an existing file using Java?

Just follow these steps (steps 4 to 6 are sketched in code after the workspace snippet below):
1. Connect to the team project collection
2. Get the version control client
3. Create a new workspace
4. Add the file to the workspace
5. Get the pending changes
6. Check in the pending changes
Below are some links about the TFS SDK for Java for your reference:
https://github.com/gocd/gocd/blob/master/tfs-impl/src/com/thoughtworks/go/tfssdk/TfsSDKCommand.java
https://github.com/jenkinsci/tfs-plugin/blob/master/src/main/java/hudson/plugins/tfs/commands/NewWorkspaceCommand.java

Please see the code snippet below for creating and mapping a workspace with TFS SDK 14.0.3:
public static Workspace createAndMapWorkspace(final TFSTeamProjectCollection tpc) {
    final String workspaceName = "SampleVCWorkspace" + System.currentTimeMillis(); //$NON-NLS-1$
    Workspace workspace = null;

    // Get the workspace
    workspace = tpc.getVersionControlClient().tryGetWorkspace(ConsoleSettings.MAPPING_LOCAL_PATH);

    // Create and map the workspace if it does not exist
    if (workspace == null) {
        workspace = tpc.getVersionControlClient().createWorkspace(
            null,
            workspaceName,
            "Sample workspace comment", //$NON-NLS-1$
            WorkspaceLocation.SERVER,
            null,
            WorkspacePermissionProfile.getPrivateProfile());

        // Map the workspace
        final WorkingFolder workingFolder = new WorkingFolder(
            ConsoleSettings.MAPPING_SERVER_PATH,
            LocalPath.canonicalize(ConsoleSettings.MAPPING_LOCAL_PATH));
        workspace.createWorkingFolder(workingFolder);
    }

    System.out.println("Workspace '" + workspaceName + "' now exists and is mapped"); //$NON-NLS-1$ //$NON-NLS-2$
    return workspace;
}
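To cover steps 4 to 6 (add, get pending changes, check in), a rough, untested sketch that reuses the pendAdd/checkIn calls from the question might look like the following; the file name "newFile.txt" under the mapped local path is a placeholder, and the pend/check-in options may need adjusting for your setup:
// Sketch only: pend an add for a file under the mapped local folder and check it in.
String localFile = ConsoleSettings.MAPPING_LOCAL_PATH + File.separator + "newFile.txt"; // placeholder file
int pended = workspace.pendAdd(
        new String[] { localFile },
        false,                                   // a single file, so no recursion
        null,                                    // let the SDK detect the file encoding
        LockLevel.NONE,
        GetOptions.GET_ALL,
        PendChangesOptions.GET_LATEST_ON_CHECKOUT);
if (pended > 0) {
    PendingChange[] changes = workspace.getPendingChanges().getPendingChanges();
    int changeset = workspace.checkIn(changes, "Adding newFile.txt"); //$NON-NLS-1$
    System.out.println("Checked in changeset " + changeset);
}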

Related

Listing public folders

I'm writing a program for importing contacts from an ERP system into Outlook. Different mailboxes will receive different lists of contacts from the ERP system. The idea is that each mailbox has a public contact folder that can be accessed by a technical user, and the technical user can write contacts into this folder. Here is the code for searching for the folder:
protected FolderId findFolderId(String folderDisplayName, String userEmail) throws Exception {
    Mailbox userMailbox = new Mailbox(userEmail);
    FolderId contactRootFolder = new FolderId(WellKnownFolderName.Root, userMailbox);
    FolderId result = null;
    FolderView view = new FolderView(Integer.MAX_VALUE);
    view.setPropertySet(new PropertySet(BasePropertySet.IdOnly, FolderSchema.DisplayName));
    view.setTraversal(FolderTraversal.Deep);
    FindFoldersResults findFolderResults = this.service.findFolders(contactRootFolder, view);
    // find specific folder
    for (Folder f : findFolderResults) {
        if (folderDisplayName.equals(f.getDisplayName())) {
            result = f.getId();
        }
    }
    return result;
}
The service object is created as follows:
this.service = new ExchangeService();
ExchangeCredentials credentials = new WebCredentials(userName, passWord);
this.service.setCredentials(credentials);
try {
    this.service.setUrl(new URI(URL));
} catch (URISyntaxException e) {
    LOGGER.error(e);
}
Where URL is the end point for the Exchange server (for Office 365 it is https://outlook.office365.com/EWS/Exchange.asmx).
The code worked with Office 2010: I got the ID of that folder, connected to it and saved the contacts. After the migration to Office 365 we can't find the public folder any more; it only finds a folder with the name "PeoplePublicData" (I didn't even know that folder existed).
Throttling in Office 365 means your code will only return the first 1000 folders in the mailbox, so if what you're looking for isn't within that result set, that would be one reason. I would suggest you get rid of
FolderView view = new FolderView(Integer.MAX_VALUE);
and change it to
FolderView view = new FolderView(1000);
and then page the results (https://msdn.microsoft.com/en-us/library/office/dn592093(v=exchg.150).aspx), which will allow you to get all the folders in a mailbox. Also, unless you are looking for something in the Non_IPM_Subtree of the mailbox, start the search at MsgFolderRoot, e.g.
FolderId contactRootFolder = new FolderId(WellKnownFolderName.MsgFolderRoot, userMailbox);
That will reduce the number of folders returned.
Also, why don't you use a SearchFilter to search for the folder you are after (https://msdn.microsoft.com/en-us/library/office/dd633627(v=exchg.80).aspx)? This would eliminate the need to page the results.
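A sketch of that SearchFilter approach, assuming the same ews-java-api types already used in the question, could look like this:
// Sketch: let the server filter on DisplayName instead of walking every folder client-side.
Mailbox userMailbox = new Mailbox(userEmail);
FolderId msgRoot = new FolderId(WellKnownFolderName.MsgFolderRoot, userMailbox);
FolderView view = new FolderView(1000);
view.setTraversal(FolderTraversal.Deep);
SearchFilter filter = new SearchFilter.IsEqualTo(FolderSchema.DisplayName, folderDisplayName);
FindFoldersResults results = service.findFolders(msgRoot, filter, view);
FolderId result = results.getTotalCount() > 0 ? results.getFolders().get(0).getId() : null;
If more folders could match than fit in one page, the results can still be paged with the FolderView offset as described in the linked article.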

Using SQLite in Codename One Application

I have a working SQLite db which I have placed in my /src folder.
I then went to the Codename One website and followed their doc example:
Database db = null;
Cursor cur = null;
try {
    db = Display.getInstance().openOrCreate("MyDb.db");
    if(query.getText().startsWith("select")) {
        cur = db.executeQuery(query.getText());
        int columns = cur.getColumnCount();
        frmMain.removeAll();
        if(columns > 0) {
            boolean next = cur.next();
            if(next) {
                ArrayList<String[]> data = new ArrayList<>();
                String[] columnNames = new String[columns];
                for(int iter = 0 ; iter < columns ; iter++) {
                    columnNames[iter] = cur.getColumnName(iter);
                }
                while(next) {
                    Row currentRow = cur.getRow();
                    String[] currentRowArray = new String[columns];
                    for(int iter = 0 ; iter < columns ; iter++) {
                        currentRowArray[iter] = currentRow.getString(iter);
                    }
                    data.add(currentRowArray);
                    next = cur.next();
                }
                Object[][] arr = new Object[data.size()][];
                data.toArray(arr);
                frmMain.add(BorderLayout.CENTER, new Table(new DefaultTableModel(columnNames, arr)));
            } else {
                frmMain.add(BorderLayout.CENTER, "Query returned no results");
            }
        } else {
            frmMain.add(BorderLayout.CENTER, "Query returned no results");
        }
    } else {
        db.execute(query.getText());
        frmMain.add(BorderLayout.CENTER, "Query completed successfully");
    }
    frmMain.revalidate();
} catch(IOException err) {
    frmMain.removeAll();
    frmMain.add(BorderLayout.CENTER, "Error: " + err);
    frmMain.revalidate();
} finally {
    Util.cleanup(db);
    Util.cleanup(cur);
}
However, when I run the example and try to execute a simple select query I get this error:
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: MyTable)
So I have added the DB
I have used the 'openOrCreate' statement
Have I missed a step?
Thanks
Are you sure that your current working directory at execution time is ./src?
Try
db = Display.getInstance().openOrCreate("./src/MyDb.db");
or open with absolute filename:
db = Display.getInstance().openOrCreate("/path/to/src/MyDb.db");
You can try this cn1lib: https://github.com/shannah/cn1-data-access-lib
I have used it and it works like a charm, except that it doesn't work for two tables in the same query and can't perform delete operations.
Cheers
Thanks for all the input guys.
Unfortunately none of the advice worked for me.
However I did solve it in the end.
It turns out that there is a folder in my home directory called '.cn1/database'. Once I placed the DB into this folder it worked.
Two things:
1] If the db does not exist then it will create it and place it into this directory
2] The db does not show up anywhere in Netbeans (well not that I could see anyway)
Thanks again
From the developer guide:
Some SQLite apps ship with a "ready made" database. We allow you to replace the DB file by using the code:
String path = Display.getInstance().getDatabasePath("databaseName");
You can then use the FileSystemStorage class to write the content of your DB file into the path. Notice that it must be a valid SQLite file!
Important: getDatabasePath() is not supported in the Javascript port. It will always return null.
This is very useful for applications that need to synchronize with a central server or applications that ship with a large database as part of their core product.
You are relying on paths that make sense in the simulator; on the device you need to copy the resource into a device-accessible location. Check out the SQL demo where this is implemented: https://www.codenameone.com/blog/sql-demo-revisited.html
Use the following code to copy the database into Codename One storage after putting the database file in the src folder; the database will be copied to the ".cn1/database" directory when it runs:
String DB_NAME = "DBNAME.db";
Database temp_db = Database.openOrCreate(DB_NAME); //To create an empty file before copy, otherwise Android will throw file not found exception
temp_db.close();
String p = Database.getDatabasePath(DB_NAME);
OutputStream o = FileSystemStorage.getInstance().openOutputStream(p);
InputStream i = Display.getInstance().getResourceAsStream(getClass(), "/" + DB_NAME);
Util.copy(i, o);
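One caveat with the snippet above: as written it copies the bundled file on every run, overwriting any changes made on the device. A small guard, sketched here on the assumption that Database.exists() is available in the Codename One API, limits the copy to the first run:
// Sketch: only seed the database the first time the app runs.
if (!Database.exists(DB_NAME)) {
    Database temp_db = Database.openOrCreate(DB_NAME); // create an empty file first, as above
    temp_db.close();
    OutputStream o = FileSystemStorage.getInstance().openOutputStream(Database.getDatabasePath(DB_NAME));
    InputStream i = Display.getInstance().getResourceAsStream(getClass(), "/" + DB_NAME);
    Util.copy(i, o);
}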
This doesn't seem like a complete answer.
I'm going through the demo, and something seems to be missing: shouldn't a build task copy this db resource to the correct target folder? Otherwise how can a deployment ever work? If the db doesn't get packaged up, how will it get deployed? Without it the app can't run.

NetBeans makefile Java execution

Here I have attached the source code and its makefile.
I use NetBeans. How should I build my project to execute this Java code in NetBeans? Please help me with detailed steps; I am new to NetBeans and Java.
I use NetBeans 8.0.2 on Windows 10 64-bit.
Source code:
package net.sourceforge.jpcap.tutorial.example15;
import net.sourceforge.jpcap.capture.*;
import net.sourceforge.jpcap.net.*;
/*
* This example utilizes the endCapture() feature.
*/
public class Example15 {
    private static final int INFINITE = -1;
    private static final int PACKET_COUNT = INFINITE;

    // BPF filter for capturing any packet
    private static final String FILTER = "";

    private PacketCapture m_pcap;
    private String m_device;

    public Example15() throws Exception {
        // Step 1: Instantiate Capturing Engine
        m_pcap = new PacketCapture();
        // Step 2: Check for devices
        m_device = m_pcap.findDevice();
        // Step 3: Open Device for Capturing (requires root)
        m_pcap.open(m_device, true);
        // Step 4: Add a BPF Filter (see tcpdump documentation)
        m_pcap.setFilter(FILTER, true);
        // Step 5: Register a Listener for Raw Packets
        m_pcap.addRawPacketListener(new RawPacketHandler(m_pcap));
        // Step 6: Capture Data (max. PACKET_COUNT packets)
        m_pcap.capture(PACKET_COUNT);
    }

    public static void main(String[] args) {
        try {
            Example15 example = new Example15();
        } catch(Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
    }
}

class RawPacketHandler implements RawPacketListener {
    private static int m_counter = 0;
    private PacketCapture m_pcap = null;

    public RawPacketHandler(PacketCapture pcap) {
        m_counter = 0;
        m_pcap = pcap;
    }

    public synchronized void rawPacketArrived(RawPacket data) {
        m_counter++;
        System.out.println("Packet " + m_counter + "\n" + data + "\n");
        if(condition())
            m_pcap.endCapture();
    }

    private boolean condition() {
        return (m_counter == 5) ? true : false;
    }
}
Makefile:
# $Id: makefile,v 1.1 2002/07/10 23:05:26 pcharles Exp $
#
# package net.sourceforge.jpcap.tutorial.example15
#
PKG = net.sourceforge.jpcap.tutorial.example15
PKG_DIR = $(subst .,/, $(PKG))
REL = ../../../../..
include ${MAKE_HOME}/os.makefile
include ${MAKE_HOME}/rules.makefile
JAVA = \
Example15
JAVA_SOURCE = $(addsuffix .java, $(JAVA))
JAVA_CLASSES = $(addsuffix .class, $(JAVA))
all: $(JAVA_CLASSES)
include ${MAKE_HOME}/targets.makefile
include ${MAKE_HOME}/depend.makefile
NetBeans uses Makefiles for C++ code but not for Java code. It is easy to get this code to build, but there is no need for the Makefile.
File -> New Project
Select Category Java on the left and "Java Application with Existing Sources" (with this option the project and sources will be in different directories) on the right.
Click Next
Change the Project name and/or directory to create the project in.
Add the original source directory in the dialog.
Click Finish to Create the project.
Within netbeans you can now use the Run-> Build Project to build it.
If you really have to have a Makefile, just make one that runs the NetBeans project (which is actually an Ant project), e.g.:
build:
	ant jar

BIRT Error: Unable to determine the default workspace location in Java

I get the following error
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
which makes little sense.
Reports are created using the BIRT designer within Eclipse, and we are using code to convert the reports into PDF.
The code looks something like:
final EngineConfig config = new EngineConfig();
config.setBIRTHome("./birt");
Platform.startup(config);
final IReportEngineFactory factory = (IReportEngineFactory) Platform
        .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
final HTMLRenderOption ho = new HTMLRenderOption();
ho.setImageHandler(new HTMLCompleteImageHandler());
config.setEmitterConfiguration(RenderOption.OUTPUT_FORMAT_HTML, ho);

// Create the engine.
this.engine = factory.createReportEngine(config);
final IReportRunnable report = this.engine.openReportDesign(reportName);
final IRunAndRenderTask task = this.engine.createRunAndRenderTask(report);

final RenderOption options = new HTMLRenderOption();
options.setOutputFormat(HTMLRenderOption.OUTPUT_FORMAT_PDF);
final String output = reportName.replaceFirst(".rptdesign", "." + HTMLRenderOption.OUTPUT_FORMAT_PDF);
options.setOutputFileName(output);
task.setRenderOption(options);

// Run the report.
task.run();
but it seems that during the task.run() method the system throws the error.
This needs to be able to run standalone, without needing Eclipse. I had hoped that setting BIRT home would make it happy, but there seems to be some other connection profile I am unaware of and probably don't need.
The full error:
07-Jan-2013 14:55:31 org.eclipse.datatools.connectivity.internal.ConnectivityPlugin log
SEVERE: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
07-Jan-2013 14:55:31 org.eclipse.birt.report.engine.api.impl.EngineTask handleFatalExceptions
SEVERE: An error happened while running the report. Cause:
java.lang.IllegalStateException: Unable to determine the default workspace location. Check your OSGi-less platform configuration of the plugin or datatools workspace path.
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getDefaultStateLocation(ConnectivityPlugin.java:155)
at org.eclipse.datatools.connectivity.internal.ConnectivityPlugin.getStorageLocation(ConnectivityPlugin.java:191)
at org.eclipse.datatools.connectivity.internal.ConnectionProfileMgmt.getStorageLocation(ConnectionProfileMgmt.java:1060)
at org.eclipse.datatools.connectivity.oda.profile.internal.OdaProfileFactory.defaultProfileStoreFile(OdaProfileFactory.java:170)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.defaultProfileStoreFile(OdaProfileExplorer.java:138)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.loadProfiles(OdaProfileExplorer.java:292)
at org.eclipse.datatools.connectivity.oda.profile.OdaProfileExplorer.getProfileByName(OdaProfileExplorer.java:537)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getConnectionProfileImpl(ProfilePropertyProviderImpl.java:184)
at org.eclipse.datatools.connectivity.oda.profile.provider.ProfilePropertyProviderImpl.getDataSourceProperties(ProfilePropertyProviderImpl.java:64)
at org.eclipse.datatools.connectivity.oda.consumer.helper.ConnectionPropertyHandler.getEffectiveProperties(ConnectionPropertyHandler.java:123)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.getEffectiveProperties(OdaConnection.java:826)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.open(OdaConnection.java:240)
at org.eclipse.birt.data.engine.odaconsumer.ConnectionManager.openConnection(ConnectionManager.java:165)
at org.eclipse.birt.data.engine.executor.DataSource.newConnection(DataSource.java:224)
at org.eclipse.birt.data.engine.executor.DataSource.open(DataSource.java:212)
at org.eclipse.birt.data.engine.impl.DataSourceRuntime.openOdiDataSource(DataSourceRuntime.java:217)
at org.eclipse.birt.data.engine.impl.QueryExecutor.openDataSource(QueryExecutor.java:407)
at org.eclipse.birt.data.engine.impl.QueryExecutor.prepareExecution(QueryExecutor.java:317)
at org.eclipse.birt.data.engine.impl.PreparedQuery.doPrepare(PreparedQuery.java:455)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.produceQueryResults(PreparedDataSourceQuery.java:190)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.execute(PreparedDataSourceQuery.java:178)
at org.eclipse.birt.data.engine.impl.PreparedOdaDSQuery.execute(PreparedOdaDSQuery.java:145)
at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.execute(DataRequestSessionImpl.java:624)
at org.eclipse.birt.report.engine.data.dte.DteDataEngine.doExecuteQuery(DteDataEngine.java:152)
at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.execute(AbstractDataEngine.java:267)
at org.eclipse.birt.report.engine.executor.ExecutionContext.executeQuery(ExecutionContext.java:1939)
at org.eclipse.birt.report.engine.executor.QueryItemExecutor.executeQuery(QueryItemExecutor.java:80)
at org.eclipse.birt.report.engine.executor.TableItemExecutor.execute(TableItemExecutor.java:62)
at org.eclipse.birt.report.engine.internal.executor.dup.SuppressDuplicateItemExecutor.execute(SuppressDuplicateItemExecutor.java:43)
at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportItemExecutor.execute(WrappedReportItemExecutor.java:46)
at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:34)
at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:65)
at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92)
at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:180)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run (RunAndRenderTask.java:77)
Has anyone seen this error and can point me in the right direction?
When I had this issue I tried two things. The first thing solved the error, but then I just got to the next error.
The first thing I tried was setting the setenv.sh file to have the following line:
export CATALINA_OPTS="$CATALINA_OPTS -Djava.io.tmpdir=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir -Dorg.eclipse.datatools_workspacepath=/opt/local/share/tomcat/apache-tomcat-8.0.8/temp/tmpdir/workspace_dtp"
This solution worked after I made the tmpdir and workspace_dtp directories in my local Tomcat server. This was done in response to the guidance here.
However, I just got to the next error, which was a connection profile error. I can look into it again if you need; I know how to replicate the issue.
The second thing I tried ended up solving the issue completely and had to do with our report designer selecting the wrong type of data source in the report design process. See my post on the Eclipse BIRT forums for the full story.
Basically, the report type was set to "JDBC Database Connection for Query Builder" when it should have been set to "JDBC Data Source".
Here is a tip that saved me from that pain: just launch Eclipse with the "-clean" option after installing the BIRT plugins.
To be clear, my project was built from BIRT Maven dependencies and so should not use Eclipse dependencies to run (except for designing reports), but I think there was a conflict somewhere, especially with org.eclipse.datatools.connectivity_1.2.4.v201202041105.jar.
For a global understanding, you should follow the migration guide:
http://wiki.eclipse.org/Birt_3.7_Migration_Guide#Connection_Profiles
It explains how to use a connection profile to externalize datasource parameters, so it is not required if you define the JDBC parameters directly in the report design.
I used this programmatic way to initialize the workspace directory:
@Override
public void initializeEngine() throws BirtException {
    // define eclipse datatools workspace path (required)
    String workspacePath = setDataToolsWorkspacePath();

    // set configuration
    final EngineConfig config = new EngineConfig();
    config.setLogConfig(workspacePath, Level.WARNING);
    // config.setResourcePath(getSqlDriverClassJarPath());

    // startup OSGi framework
    Platform.startup(config); // really needed ?

    IReportEngineFactory factory = (IReportEngineFactory) Platform
            .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
    engine = factory.createReportEngine(config);
    engine.changeLogLevel(Level.WARNING);
}

// DATATOOLS_WORKSPACE_PATH is the system property name "org.eclipse.datatools_workspacepath"
private String setDataToolsWorkspacePath() {
    String workspacePath = System.getProperty(DATATOOLS_WORKSPACE_PATH);
    if (workspacePath == null) {
        workspacePath = FilenameUtils.concat(SystemUtils.getJavaIoTmpDir().getAbsolutePath(), "workspace_dtp");
        File workspaceDir = new File(workspacePath);
        if (!workspaceDir.exists()) {
            workspaceDir.mkdir();
        }
        if (!workspaceDir.canWrite()) {
            workspaceDir.setWritable(true);
        }
        System.setProperty(DATATOOLS_WORKSPACE_PATH, workspacePath);
    }
    return workspacePath;
}
I also needed to force the datasource parameters at runtime this way:
private void generateReportOutput(InputStream reportDesignInStream, File outputFile, OUTPUT_FORMAT outputFormat,
        Map<PARAM, Object> params) throws EngineException, SemanticException {
    // Open a report design
    IReportRunnable design = engine.openReportDesign(reportDesignInStream);
    // Use data-source properties from persistence.xml
    forceDataSource(design);
    // Create RunAndRender task
    IRunAndRenderTask runTask = engine.createRunAndRenderTask(design);
    // Use data-source from JPA persistence context
    // forceDataSourceConnection(runTask);
    // Define report parameters
    defineReportParameters(runTask, params);
    // Set render options
    runTask.setRenderOption(getRenderOptions(outputFile, outputFormat, params));
    // Execute task
    runTask.run();
}

private void forceDataSource(IReportRunnable runableReport) throws SemanticException {
    DesignElementHandle designHandle = runableReport.getDesignHandle();
    Map<String, String> persistenceProperties = PersistenceUtils.getPersistenceProperties();
    String dsURL = persistenceProperties.get(AvailableSettings.JDBC_URL);
    String dsDatabase = StringUtils.substringAfterLast(dsURL, "/");
    String dsUser = persistenceProperties.get(AvailableSettings.JDBC_USER);
    String dsPass = persistenceProperties.get(AvailableSettings.JDBC_PASSWORD);
    String dsDriver = persistenceProperties.get(AvailableSettings.JDBC_DRIVER);

    SlotHandle dataSources = ((ReportDesignHandle) designHandle).getDataSources();
    int count = dataSources.getCount();
    for (int i = 0; i < count; i++) {
        DesignElementHandle dsHandle = dataSources.get(i);
        if (dsHandle != null && dsHandle instanceof OdaDataSourceHandle) {
            // replace connection properties from persistence.xml
            dsHandle.setProperty("databaseName", dsDatabase);
            dsHandle.setProperty("username", dsUser);
            dsHandle.setProperty("password", dsPass);
            dsHandle.setProperty("URL", dsURL);
            dsHandle.setProperty("driverClass", dsDriver);
            dsHandle.setProperty("jarList", getSqlDriverClassJarPath());
            // @SuppressWarnings("unchecked")
            // List<ExtendedProperty> privateProperties = (List<ExtendedProperty>) dsHandle
            //         .getProperty("privateDriverProperties");
            // for (ExtendedProperty extProp : privateProperties) {
            //     if ("odaUser".equals(extProp.getName())) {
            //         extProp.setValue(dsUser);
            //     }
            // }
        }
    }
}
I was having the same issue. Changing the Data Source type from "JDBC Database Connection for Query Builder" to "JDBC Data Source" solved the problem for me.

Checking Out Directory / File with SVNKit

I can't see where checking out is documented on the wiki. Ideally, I would like to check out a file "example/folder/file.xml", if not just the folder, and then, when the application closes down or at other points, be able to commit changes to this file back in. How do I do this?
As an SVNKit developer, I would recommend preferring the new API based on SvnOperationFactory. The old API (based on SVNClientManager) will still be operational, but all new SVN features will come only to the new API.
final SvnOperationFactory svnOperationFactory = new SvnOperationFactory();
try {
    final SvnCheckout checkout = svnOperationFactory.createCheckout();
    checkout.setSingleTarget(SvnTarget.fromFile(workingCopyDirectory));
    checkout.setSource(SvnTarget.fromURL(url));
    // ... other options
    checkout.run();
} finally {
    svnOperationFactory.dispose();
}
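For the "commit back when the application closes" part of the question, the same operation factory can run a commit. A minimal, untested sketch using the SvnCommit operation from the same wc2 API (workingCopyDirectory as in the checkout above):
final SvnOperationFactory svnOperationFactory = new SvnOperationFactory();
try {
    // Commit local modifications in the working copy back to the repository.
    final SvnCommit commit = svnOperationFactory.createCommit();
    commit.setSingleTarget(SvnTarget.fromFile(workingCopyDirectory));
    commit.setCommitMessage("Saving changes on application shutdown");
    final SVNCommitInfo info = commit.run();
    System.out.println("Committed revision " + info.getNewRevision());
} finally {
    svnOperationFactory.dispose();
}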
You cannot check out a file in Subversion. You have to check out a folder.
To check out a folder with one or more files:
SVNClientManager ourClientManager = SVNClientManager.newInstance(null,
        repository.getAuthenticationManager());
SVNUpdateClient updateClient = ourClientManager.getUpdateClient();
updateClient.setIgnoreExternals(false);
updateClient.doCheckout(url, destPath, revision, revision, isRecursive);
To commit a previously checked out folder:
SVNClientManager ourClientManager = SVNClientManager.newInstance(null,
        repository.getAuthenticationManager());
ourClientManager.getWCClient().doInfo(wcPath, SVNRevision.HEAD);
ourClientManager.getCommitClient().doCommit(
        new File[] { wcPath }, keepLocks, commitMessage, false, true);
I also used the code snippet proposed by Dmitry Pavlenko and I had no problems.
But it took nearly 30 minutes to check out or update a repo structure of 35 MB.
That is not usable in my use case (simply checking out a directory structure as part of the content/documents/media of a web application).
Or have I made some errors?
final ISVNAuthenticationManager authManager = SVNWCUtil.createDefaultAuthenticationManager(name, password);
final SVNURL svnUrl = SVNURL.create(url.getProtocol(), name, url.getHost(), 443, url.getPath(), true);

SVNRepository svnRepo = SVNRepositoryFactory.create(svnUrl);
svnRepo.setAuthenticationManager(authManager);
svnOperationFactory.setAuthenticationManager(authManager);

SVNDirEntry entry = svnRepo.info(".", -1);
long remoteRevision = entry.getRevision();

if (!workingCopyDirectory.exists()) {
    workingCopyDirectory.mkdirs();
}

final SvnCheckout checkout = svnOperationFactory.createCheckout();
checkout.setSource(SvnTarget.fromURL(svnUrl));
checkout.setSingleTarget(SvnTarget.fromFile(workingCopyDirectory));
remoteRevision = checkout.run();
