Is there a way to create an AEM package via Java code?
We need to package some content every night via a service run by a cron job.
I checked online and it seems to be possible using a curl command. But either way, I'd need this done via a daily service running Java code.
Please refer to some of the links given below:
1) https://helpx.adobe.com/experience-manager/using/dynamic_aem_packages.html
2) http://cq5experiences.blogspot.in/2014/01/creating-packages-using-java-code-in-cq5.html
The main code goes something like this:
final JcrPackage jcrPackage = getPackageHelper().createPackageFromPathFilterSets(
        packageResources,
        request.getResourceResolver().adaptTo(Session.class),
        properties.get(PACKAGE_GROUP_NAME, getDefaultPackageGroupName()),
        properties.get(PACKAGE_NAME, getDefaultPackageName()),
        properties.get(PACKAGE_VERSION, DEFAULT_PACKAGE_VERSION),
        PackageHelper.ConflictResolution.valueOf(
                properties.get(CONFLICT_RESOLUTION,
                        PackageHelper.ConflictResolution.IncrementVersion.toString())),
        packageDefinitionProperties);
So first of all, you can create a scheduler, and in the scheduler's run method you can write the logic to package the required filter paths, as sketched below.
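For illustration, here is a minimal sketch of such a nightly task using the plain JCR packaging API (Jackrabbit FileVault) rather than the ACS PackageHelper shown above. The cron expression, group/package names, filter path, and the "package-writer" service user mapping are all placeholder assumptions, not part of the original question:

import java.util.Collections;
import javax.jcr.Session;
import org.apache.jackrabbit.vault.fs.api.PathFilterSet;
import org.apache.jackrabbit.vault.fs.config.DefaultWorkspaceFilter;
import org.apache.jackrabbit.vault.packaging.JcrPackage;
import org.apache.jackrabbit.vault.packaging.JcrPackageManager;
import org.apache.jackrabbit.vault.packaging.Packaging;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Runs every night at 1 AM via the Sling scheduler; adjust the cron expression as needed.
@Component(service = Runnable.class, property = { "scheduler.expression=0 0 1 * * ?" })
public class NightlyPackagingTask implements Runnable {

    @Reference
    private ResourceResolverFactory resolverFactory;

    @Reference
    private Packaging packaging;

    @Override
    public void run() {
        // "package-writer" is an assumed service user mapping with write access to /etc/packages.
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(
                Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, (Object) "package-writer"))) {
            Session session = resolver.adaptTo(Session.class);
            JcrPackageManager packageManager = packaging.getPackageManager(session);

            // Create an empty package and define the content paths to include.
            JcrPackage jcrPackage = packageManager.create("my-group", "nightly-content", "1.0");
            DefaultWorkspaceFilter filter = new DefaultWorkspaceFilter();
            filter.add(new PathFilterSet("/content/my-site"));
            jcrPackage.getDefinition().setFilter(filter, true);

            // Build (assemble) the package so it can be downloaded or replicated afterwards.
            packageManager.assemble(jcrPackage, null);
        } catch (Exception e) {
            // Log and continue so the scheduler keeps running on subsequent nights.
            e.printStackTrace();
        }
    }
}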
Hope this helps.
I'm trying to connect to an existing Azure Event Hub feed using Java. For my first steps, I'm adjusting the Event Hub Samples project, specifically the EventProcessorSample.
However, it depends on you having an Azure Storage set up which will be used for the ILeaseManager and ICheckpointManager; since I don't have one, I've been looking around and found the InMemoryLeaseManager and InMemoryCheckpointManager classes I'd like to use for my first steps.
However, the protocol for those is that they are first created, then passed to the builder to create an EventProcessorHost, and after that you need to call initialize with the created host's HostContext. Here's how I do that:
InMemoryCheckpointManager checkpointManager = new InMemoryCheckpointManager();
InMemoryLeaseManager leaseManager = new InMemoryLeaseManager();
EventProcessorHost processorHost = EventProcessorHost.EventProcessorHostBuilder
        .newBuilder(EventProcessorHost.createHostName(hostNamePrefix), consumerGroupName)
        .useUserCheckpointAndLeaseManagers(checkpointManager, leaseManager)
        .useEventHubConnectionString(eventHubConnectionString.toString(), eventHubName)
        .build();
checkpointManager.initialize(processorHost.getHostContext());
leaseManager.initialize(processorHost.getHostContext());
However, EventProcessorHost#getHostContext() is package-private, so the only way to get the above to compile is to put it in a class declared in package com.microsoft.azure.eventprocessorhost. This will compile but not run, because the original Event Hub package is signed, so running this causes a
Exception in thread "main" java.lang.SecurityException: class "com.microsoft.azure.eventprocessorhost.ILeaseManager"'s signer information does not match signer information of other classes in the same package
So I really have to wonder how you are supposed to use those utility classes at all.
Of course I can a) implement the interfaces myself or b) create an unsigned Event Hub package, but neither seems to be what's intended.
Am I missing something?
I have a quick and relatively easy question, I think, but I don't get it, so here I am.
So, I've got something like this:
file.upload = Upload.upload({
    url: 'sendemail',
    data: {file: file}
});
Never mind the rest of the code. I want to know what that url: property is for. Is it for my Java Spring @RequestMapping("/sendemail")? Or is it a folder on my server to store the file in?
Please answer me, I just want to know it :<
When you are using Java Spring, it provides you with a lot of useful annotations.
One of them is
@RequestMapping()
This annotation handles the routing of your services. So when you write @RequestMapping("/sendemail"), Spring maps requests for the endpoint /sendemail to that handler method and does the job accordingly.
Now to your question,
So {url: 'sendemail'} specifies that the upload is sent to the URL ending in /sendemail, i.e. it is handled by your @RequestMapping("/sendemail") method; it is not a folder on your server.
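For illustration, a minimal sketch of the server side might look like this. The class name and the response body are assumptions, as is the multipart configuration; only the /sendemail mapping and the "file" parameter name come from the question:

import java.io.IOException;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.multipart.MultipartFile;

@Controller
public class EmailController {

    // Handles the POST that Upload.upload({url: 'sendemail', ...}) sends from the browser.
    @RequestMapping(value = "/sendemail", method = RequestMethod.POST)
    @ResponseBody
    public String sendEmail(@RequestParam("file") MultipartFile file) throws IOException {
        // The uploaded bytes arrive here; where (or whether) they are stored is up to you.
        byte[] contents = file.getBytes();
        return "Received " + file.getOriginalFilename() + " (" + contents.length + " bytes)";
    }
}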
I think the title explains it all: is there a nice way to create and load offline snapshots of a database schema using SchemaCrawler without using the command line? If yes, can you provide some example code or a link, please? If not, some example Java code that uses the command-line options would be helpful too (I don't have much experience with that)!
Thanks for any kind of help!
PS: I managed to create an offline snapshot with this code:
final SchemaCrawlerOptions options = new SchemaCrawlerOptions();
// Set what details are required in the schema - this affects the
// time taken to crawl the schema
options.setSchemaInfoLevel(SchemaInfoLevelBuilder.standard());
options.setRoutineInclusionRule(new ExcludeAll());
options.setSchemaInclusionRule(new RegularExpressionInclusionRule(/* some regex here */));
options.setTableInclusionRule(new RegularExpressionExclusionRule(/* some regex here */));

// assumed declaration of outputOptions (not shown in the original snippet)
final OutputOptions outputOptions = new OutputOptions();
outputOptions.setCompressedOutputFile(Paths.get("./test_db_snapshot.xml"));

final String command = "serialize";
final Executable executable = new SchemaCrawlerExecutable(command);
executable.setSchemaCrawlerOptions(options);
executable.setOutputOptions(outputOptions);
executable.execute(getConnection());
Not sure how to connect to it though.
You need to use schemacrawler.tools.offline.OfflineSnapshotExecutable along with a schemacrawler.tools.offline.jdbc.OfflineConnection to "connect" to your database snapshot.
Please take a look at the following code:
OfflineSnapshotTest.offlineSnapshotExecutable()
And @ZidaneT, to load an offline snapshot, use code like that in LoadSnapshotTest.java
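For reference, a rough sketch of loading the snapshot back might look like the following. It follows the pattern of the tests mentioned above; the OfflineConnection constructor taking a Path is an assumption and may differ between SchemaCrawler versions:

import java.nio.file.Paths;
import java.sql.Connection;
import schemacrawler.schema.Catalog;
import schemacrawler.schemacrawler.SchemaCrawlerOptions;
import schemacrawler.tools.offline.jdbc.OfflineConnection;
import schemacrawler.utility.SchemaCrawlerUtility;

public class LoadSnapshotExample {
    public static void main(String[] args) throws Exception {
        // "Connect" to the serialized snapshot instead of a live database.
        Connection connection = new OfflineConnection(Paths.get("./test_db_snapshot.xml"));

        // Crawl the snapshot the same way you would crawl a real database.
        SchemaCrawlerOptions options = new SchemaCrawlerOptions();
        Catalog catalog = SchemaCrawlerUtility.getCatalog(connection, options);

        catalog.getTables().forEach(table -> System.out.println(table.getFullName()));
    }
}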
Sualeh Fatehi, SchemaCrawler
I am trying to access the list of all jobs/projects in Jenkins and their project files in Java, not in Groovy and not by parsing XML files.
I suggest using other ways to do this rather than Java. Consider using the Ruby or Python API wrappers, Groovy, the CLI API, the Script Console, etc. Also refer to the Remote Access API for more information.
But if you still need Java: there is no Java API, but there is a REST API, and you can use any Java HTTP client to talk to it. Here are the required steps (a small Java sketch follows below):
1. Get a list of jobs.
This can be done by requesting http://jenkins_url:port/api/json?tree=jobs[name,url].
Response example:
{
  "jobs" : [
    {
      "name" : "JOB_NAME1",
      "url" : "http://jenkins_url:port/job/JOB_NAME1/"
    },
    {
      "name" : "JOB_NAME2",
      "url" : "http://jenkins_url:port/job/JOB_NAME2/"
    },
    ...
  ]
}
From there you can retrieve the job names and URLs.
2. Get build artifacts.
Given the job URL, download from job_url/lastSuccessfulBuild/artifact/*zip*/archive.zip
3. Or get workspace files.
Given the job URL, download from job_url/ws/*zip*/workspace.zip
Beware, some of these operations require proper Jenkins credentials unless anonymous access is enabled; otherwise the request will fail.
More detailed information about the REST API is available on your Jenkins instance at http://jenkins_url:port/api/
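Putting step 1 together in plain Java, a rough sketch might look like this. The base URL is a placeholder and anonymous read access is assumed; add authentication headers if your Jenkins requires credentials:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JenkinsJobLister {

    public static void main(String[] args) throws Exception {
        // Placeholder base URL; replace with your real Jenkins host and port.
        String jenkinsUrl = "http://localhost:8080";

        // Step 1: request the job list as JSON.
        URL url = new URL(jenkinsUrl + "/api/json?tree=jobs[name,url]");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        StringBuilder response = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
        }

        // The body is the JSON shown above; parse it with any JSON library (e.g. Jackson)
        // to get each job's name and url, then download the artifact or workspace zips
        // from the URLs in steps 2 and 3 the same way.
        System.out.println(response);
    }
}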
As @Vitalii said, it is better to do this in Groovy or another scripting language, or to parse the api/xml file to get the list of workspace jobs.
For your case, you can get it by making your class extend Trigger and using the job object provided by the Trigger class.
Note: include all the other default classes the Jenkins plugin requires, and make sure the trigger runs every minute for this code to execute properly.
import hudson.model.BuildableItem;
import hudson.triggers.Trigger;
import java.util.logging.Logger;

public class xyz extends Trigger<BuildableItem>
{
    private static final Logger LOGGER = Logger.getLogger(xyz.class.getName());

    @Override
    public void run()
    {
        // "job" is the item this trigger is attached to, provided by the Trigger base class.
        LOGGER.info("Project Name: " + job.getName());
    }
}
I am developing a web app using Spring MVC. Simply put, a user uploads a file which can be of different types (.csv, .xls, .txt, .xml) and the application parses this file and extracts data for further processing. The problem is that the format of the file can change frequently, so there must be some way for quick and easy customization. Being a bit familiar with Talend, I decided to give it a shot and use it as the ETL tool for my app. This short tutorial shows how to run a Talend job from within a Java app - http://www.talendforge.org/forum/viewtopic.php?id=2901
However, jobs created using Talend can read from/write to physical files, directories, or databases. Is it possible to modify a Talend job so that it can be given some Java object as a parameter and then return a Java object, just like an ordinary Java method?
For example something like:
String[] param = new String[]{"John Doe"};
String talendJobOutput = teaPot.myjob_0_1.myJob.main(param);
where teaPot.myjob_0_1.myJob is the Talend job integrated into my app
I did something similar, I guess. I created a mapping in Talend using tMap and exported it as a Talend job (a Java SE program). If you include the libraries of that job, you can run the Talend job as described by others.
To pass arbitrary Java objects, you can use the following methods, which are present in every Talend job:
public Object getValueObject() {
    return this.valueObject;
}

public void setValueObject(Object valueObject) {
    this.valueObject = valueObject;
}
In your job you have to cast this object; e.g. you can pass in a List of HashMaps and use Java reflection to populate rows. Use a tJavaFlex or a custom component for that.
Using this method I can adjust the mapping of my data visually in Talend, but still use the generated code as a library in my Java application.
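For illustration, a hypothetical caller could then look like this. The job class name is taken from the question; runJob() and the value-object accessors are the methods shown in this thread, but details vary with the Talend version and export settings:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class TalendJobCaller {

    public static void main(String[] args) {
        // The exported job class, as named in the question.
        teaPot.myjob_0_1.myJob job = new teaPot.myjob_0_1.myJob();

        // Hand arbitrary Java data to the job before running it.
        List<HashMap<String, Object>> rows = new ArrayList<>();
        HashMap<String, Object> row = new HashMap<>();
        row.put("name", "John Doe");
        rows.add(row);
        job.setValueObject(rows);

        // Run the job; context parameters can still be passed through the String array.
        job.runJob(new String[] {});

        // Read back whatever the job placed into the value object (cast as needed).
        Object result = job.getValueObject();
        System.out.println(result);
    }
}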
Now that I better understand what you want, I think this is NOT possible, because Talend's architecture is built like a standalone app, with a "main" entry point, much like the Java main() method:
public String[][] runJob(String[] args) {
    int exitCode = runJobInTOS(args);
    String[][] bufferValue = new String[][] { { Integer.toString(exitCode) } };
    return bufferValue;
}
That is to say: the Talend execution entry point only accepts a String array as input and doesn't return anything as output (except a system return code).
So, you won't be able to link to the Talend (generated) code as a library, only run it as an isolated tool that you can parameterize (using context vars, see my other response) before launching.
You can see that in the Talend help center and forum, the only integration described is as an "external" job execution:
Talend knowledge base "Calling a Talend Job from an external Java application" article
Talend Community Forum "Java Object to Talend" topic
Maybe you have to rethink the architecture of your application if you want to use Talend as the ETL tool for your purpose.
Now, from the Talend ETL point of view: if you want to parameterize the execution environment of your jobs (for example the physical directory of the uploaded files), you should use context variables that can be loaded at execution time from a configuration file, as mentioned here:
https://help.talend.com/display/TalendOpenStudioforDataIntegrationUserGuide53EN/2.6.6+Context+settings