Edit Liquibase ChangeLog to include new changeset files at runtime - java

I have code that edits the changelog.json at runtime. However, when I run liquibase.update() it does not pick up the latest changes.
ChangeLog file before runtime:
{
  "databaseChangeLog" : [ ]
}
ChangeLog file at runtime, before execution of liquibase.update():
{
  "databaseChangeLog" : [ {
    "include" : {
      "file" : "changesets/myFolder/changeset-0.sql"
    }
  } ]
}
Method used to add new changeset files to the ChangeLog at runtime:
public void addNewChangeSetToChangeLog(File file) throws IOException {
    // Read the existing changelog into a single string
    StringBuilder jsonString = new StringBuilder();
    try (BufferedReader br = new BufferedReader(new FileReader("src/changesets/DbChangelog.json"))) {
        String line;
        while ((line = br.readLine()) != null) {
            jsonString.append(line).append("\n");
        }
    }
    JsonNode jsonNode = objectMapper.readTree(jsonString.toString());
    ArrayNode arrayNode = (ArrayNode) jsonNode.get("databaseChangeLog");
    // Append a new include entry pointing at the given changeset file
    JsonNode newChangeSetNode = objectMapper.readTree("{\"include\":{\"file\":\"" + file.getPath() + "\"}}");
    arrayNode.add(newChangeSetNode);
    objectMapper.enable(SerializationFeature.INDENT_OUTPUT);
    objectMapper.writer().writeValue(new File("src/changesets/DbChangelog.json"), jsonNode);
}
I have tried:
- getting an instance of the Liquibase class at runtime using Class.getInstance()
- taking the location of the ChangeLog at runtime via user input, so that it is unknown at compile time
Following is the method used to call the Liquibase update:
public void execute(Connection connection) throws LiquibaseException, IOException {
    Database database = DatabaseFactory.getInstance()
            .findCorrectDatabaseImplementation(new JdbcConnection(connection));
    LiquibaseUtils liquibaseUtils = new LiquibaseUtils();
    liquibase.Liquibase liquibase = new liquibase.Liquibase("changesets/DbChangelog.json",
            new ClassLoaderResourceAccessor(), database);
    liquibase.update(new Contexts(), new LabelExpression());
}

If you have an edited query, for example an ALTER TABLE or a data add/update, you must declare it in a new changeset: when Liquibase compares the changelog against the DATABASECHANGELOG table in your database and finds a new changeset, it will execute it.
In your case, add a new file changeset-1.sql with the new query and include it from the changelog, as in the sketch below.
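For illustration, assuming the same layout as the changelog shown above, the file would then look something like this (a sketch, not the exact file):
{
  "databaseChangeLog" : [ {
    "include" : {
      "file" : "changesets/myFolder/changeset-0.sql"
    }
  }, {
    "include" : {
      "file" : "changesets/myFolder/changeset-1.sql"
    }
  } ]
}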

Related

How to get SpringBoot to find my file in the src/main/resources directory

I have the following class (below). The file corresponding to vocabLookupFile is found when it is in the root directory of my SpringBoot project. However, I really want it in the src/main/resources directory of the project. With the setup below, it is not found there. By the way, the LookupMapper component is autowired in a @Service class, and other than not finding the file in src/main/resources, it works fine.
I am hoping someone can tell me how to modify the code below so the file can be found there. Thanks for any ideas.
@Component
public class LookupMapper {
    public HashMap<String, LookUp> entry = new HashMap<>();

    @Autowired
    public LookupMapper(@Value("${vocab.lookup.mapper}") String vocabLookupFile) throws IOException {
        try (CSVReader csvReader = new CSVReader(new FileReader(vocabLookupFile))) {
            String[] values = null;
            while ((values = csvReader.readNext()) != null) {
                LookUp lookUp = new LookUp(values[1], Boolean.parseBoolean(values[2]));
                this.entry.put(values[0].toUpperCase(), lookUp);
            }
        }
    }
}
As per Mark Rotteveel's suggestion: with my file in the resources directory, I needed, in general, a solution that could retrieve the file from the context of the jar (things inside the jar are considered "resources"). I used the ClassLoader to get the resource as a stream, so the code below works for me. Thanks to Mark.
@Component
public class LookupMapper {
    public HashMap<String, LookUp> entry = new HashMap<>();

    @Autowired
    public LookupMapper(@Value("${vocab.lookup.mapper}") String vocabLookupFile) throws IOException {
        ClassLoader classLoader = getClass().getClassLoader();
        try (CSVReader csvReader = new CSVReader(new InputStreamReader(classLoader.getResourceAsStream(vocabLookupFile)))) {
            String[] values = null;
            while ((values = csvReader.readNext()) != null) {
                LookUp lookUp = new LookUp(values[1], Boolean.parseBoolean(values[2]));
                this.entry.put(values[0].toUpperCase(), lookUp);
            }
        }
    }
}
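One caveat worth noting: getResourceAsStream returns null when the resource cannot be found, which then surfaces as an unhelpful NullPointerException inside the reader. A small defensive sketch (the exception message is just an example):
InputStream in = classLoader.getResourceAsStream(vocabLookupFile);
if (in == null) {
    throw new FileNotFoundException("Classpath resource not found: " + vocabLookupFile);
}
try (CSVReader csvReader = new CSVReader(new InputStreamReader(in))) {
    // ... same parsing loop as above
}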

How to parse a big rdf file in rdf4j

I want to parse a huge file in RDF4J using the following code, but I get an exception due to a parser limit:
public class ConvertOntology {
    public static void main(String[] args) throws RDFParseException, RDFHandlerException, IOException {
        String file = "swetodblp_april_2008.rdf";
        File initialFile = new File(file);
        InputStream input = new FileInputStream(initialFile);
        RDFParser parser = Rio.createParser(RDFFormat.RDFXML);
        parser.setPreserveBNodeIDs(true);
        Model model = new LinkedHashModel();
        parser.setRDFHandler(new StatementCollector(model));
        parser.parse(input, initialFile.getAbsolutePath());
        FileOutputStream out = new FileOutputStream("swetodblp_april_2008.nt");
        RDFWriter writer = Rio.createWriter(RDFFormat.TURTLE, out);
        try {
            writer.startRDF();
            for (Statement st : model) {
                writer.handleStatement(st);
            }
            writer.endRDF();
        } catch (RDFHandlerException e) {
            // ignored
        } finally {
            out.close();
        }
    }
}
The parser has encountered more than "100,000" entity expansions in this document; this is the limit imposed by the application.
I execute my code as follows, as suggested on the RDF4J web site, to set the two limits (as in the following command):
mvn -Djdk.xml.totalEntitySizeLimit=0 -DentityExpansionLimit=0 exec:java
Any help please?
The error comes from the Apache Xerces XML parser rather than from the default JDK XML parser, which is why the JDK system properties shown above have no effect.
So just delete the Xerces XML folder from your .m2 repository and the code works fine.
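A less destructive alternative is to exclude Xerces in the pom so it cannot come back on the next build. A sketch, assuming Xerces arrives transitively (run mvn dependency:tree to find the real dependency that pulls it in; the coordinates below other than xerces:xercesImpl are placeholders):
<dependency>
    <groupId>some.group</groupId>
    <artifactId>artifact-that-pulls-in-xerces</artifactId>
    <version>...</version>
    <exclusions>
        <exclusion>
            <groupId>xerces</groupId>
            <artifactId>xercesImpl</artifactId>
        </exclusion>
    </exclusions>
</dependency>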

Why can't avro take the schema from the .avro file?

Here is the deserializer from tutorialspoint.
public class Deserialize {
    public static void main(String[] args) throws Exception {
        // Instantiating the Schema.Parser class.
        Schema schema = new Schema.Parser().parse(new File("/home/Hadoop/Avro/schema/emp.avsc"));
        DatumReader<GenericRecord> datumReader = new GenericDatumReader<GenericRecord>(schema);
        DataFileReader<GenericRecord> dataFileReader = new DataFileReader<GenericRecord>(
                new File("/home/Hadoop/Avro_Work/without_code_gen/mydata.txt"), datumReader);
        GenericRecord emp = null;
        while (dataFileReader.hasNext()) {
            emp = dataFileReader.next(emp);
            System.out.println(emp);
        }
        System.out.println("hello");
    }
}
My question is: if there is already a schema in the .avro file, why do I have to pass the schema as well? I find it very inconvenient having to provide the schema in order to parse the file.
Avro requires two schemas for schema resolution: a reader schema and a writer schema.
The writer schema is included in the file.
And you can parse the schema out of the file:
String filepath = ...;
DataFileReader<Void> reader = new DataFileReader<>(Util.openSeekableFromFS(filepath),
new GenericDatumReader<>());
System.out.println(reader.getSchema().toString(true));
This is how java -jar avro-tools.jar getschema works.
Note that you may need your own equivalent of the Util.openSeekableFromFS method, since it seems to be package private.
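If you would rather avoid that helper entirely, here is a minimal sketch using the public DataFileReader(File, DatumReader) constructor instead (assuming a local file path passed as the first argument):
import java.io.File;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class PrintSchema {
    public static void main(String[] args) throws Exception {
        // The writer schema is read from the file header; no explicit schema is needed.
        try (DataFileReader<GenericRecord> reader =
                new DataFileReader<>(new File(args[0]), new GenericDatumReader<>())) {
            System.out.println(reader.getSchema().toString(true));
        }
    }
}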

How do I run a java method against all files of a type within a gradle project?

I have a Java application that generates xqDoc (similar to JavaDoc) from an XQuery (*.xqy) source file.
I have a Maven project at: https://github.com/lcahlander/xqdoc-core.git
I want to run the following Java code against all .xqy files in src/main/ml-modules/root/**/*.xqy and place the results respectively in xqDoc/**/*.xml:
HashMap uriMap = new HashMap();
uriMap.put(XPathDriver.XPATH_PREFIX, XPathDriver.XPATH_URI);
InputStream is = Files.newInputStream(Paths.get(cmd.getOptionValue("f")));
controller = new XQDocController(XQDocController.JUL2017);
controller.setPredefinedFunctionNamespaces(uriMap);
XQDocPayload payload = controller.process(is, "");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
InputSource isOut = new InputSource();
isOut.setCharacterStream(new StringReader(payload.getXQDocXML()));
Document doc = db.parse(isOut);
The xqDoc parser could also be run from the command line as
java -jar xqdoc-core-0.8-jar-with-dependencies.jar -Dfn=http://www.w3.org/2003/05/xpath-functions -Dxdmp=http://marklogic.com/xdmp -f filepath
I want to create the gradle task generateXQDoc
Something like this should work (untested). You can adjust the hard-coded paths to use project properties, but it should be enough to demonstrate how to iterate over each file in the file tree and execute the processor:
task generateXQDoc {
    description = 'Generate XQDocs'
    doLast {
        def sourceDir = 'src/main/ml-modules'
        File targetDir = new File('xqDoc')

        HashMap uriMap = new HashMap()
        uriMap.put(XPathDriver.XPATH_PREFIX, XPathDriver.XPATH_URI)
        def controller = new XQDocController(XQDocController.JUL2017)
        controller.setPredefinedFunctionNamespaces(uriMap)

        def xqueryFiles = fileTree(dir: sourceDir, include: '**/*.xq*')
        xqueryFiles.each { file ->
            InputStream is = Files.newInputStream(file.toPath())
            XQDocPayload payload = controller.process(is, "")
            is.close()
            // Mirror the source layout under the target directory
            String relativePath = new File(sourceDir).toURI().relativize(file.toURI()).getPath()
            File outputFile = new File(targetDir, relativePath)
            outputFile.parentFile.mkdirs()
            outputFile.write(payload.getXQDocXML())
        }
    }
}
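Run it with gradle generateXQDoc; this assumes the xqdoc-core classes (XQDocController, XPathDriver, and so on) are available on the buildscript classpath.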
This is what I ended up developing, using a filtered Copy task.
import org.apache.tools.ant.filters.BaseFilterReader

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath files('lib/xqdoc-1.9-jar-with-dependencies.jar')
    }
}

plugins {
    id "net.saliman.properties" version "1.4.6"
    id "com.marklogic.ml-gradle" version "3.6.0"
}

repositories {
    jcenter()
    maven { url "http://developer.marklogic.com/maven2/" }
    maven { url "http://repository.cloudera.com/artifactory/cloudera-repos/" }
}

configurations {
    mlcp {
        resolutionStrategy {
            force "xml-apis:xml-apis:1.4.01"
        }
    }
}

dependencies {
    mlcp "com.marklogic:mlcp:9.0.6"
    mlcp files("marklogic/lib")
}

class XQDocFilter extends BaseFilterReader {
    XQDocFilter(Reader input) {
        super(new StringReader(new org.exquery.xqdoc.MarkLogicProcessor().process(input.text)))
    }
}

/**
 * Generate the xqDoc files from the XQuery source code
 */
task generateXQDocs(type: Copy) {
    into 'xqDoc'
    from 'src/main/ml-modules/root'
    include '**/*.xqy'
    rename { it - '.xqy' + '.xml' }
    includeEmptyDirs = false
    filter XQDocFilter
}

/**
 * Deploy the xqDoc files to the content repository
 */
task importXQDoc(type: com.marklogic.gradle.task.MlcpTask) {
    classpath = configurations.mlcp
    command = "IMPORT"
    database = "emh-accelerator-content"
    input_file_path = "xqDoc"
    output_collections = "xqdoc"
    output_uri_replace = ".*xqDoc,'/xqDoc'"
    document_type = "mixed"
}
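With this build script, gradle generateXQDocs writes one .xml file per .xqy module into the xqDoc directory, and gradle importXQDoc then loads those files into the emh-accelerator-content database under the xqdoc collection.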
And here is the Java class being called.
public class MarkLogicProcessor {

    public String process(String txt) throws XQDocException, ParserConfigurationException, IOException, SAXException {
        HashMap uriMap = new HashMap();
        uriMap.put("fn", "http://www.w3.org/2003/05/xpath-functions");
        uriMap.put("cts", "http://marklogic.com/cts"); // MarkLogic Server search functions (Core Text Services)
        uriMap.put("dav", "DAV:"); // Used with WebDAV
        uriMap.put("dbg", "http://marklogic.com/xdmp/debug"); // Debug Built-In functions
        uriMap.put("dir", "http://marklogic.com/xdmp/directory"); // MarkLogic Server directory XML
        uriMap.put("err", "http://www.w3.org/2005/xqt-errors"); // namespace for XQuery and XPath errors
        uriMap.put("error", "http://marklogic.com/xdmp/error"); // MarkLogic Server error namespace
        uriMap.put("local", "http://www.w3.org/2005/xquery-local-functions"); // local namespace for functions defined in main modules
        uriMap.put("lock", "http://marklogic.com/xdmp/lock"); // MarkLogic Server locks
        uriMap.put("map", "http://marklogic.com/xdmp/map"); // MarkLogic Server maps
        uriMap.put("math", "http://marklogic.com/xdmp/math"); // math Built-In functions
        uriMap.put("prof", "http://marklogic.com/xdmp/profile"); // profile Built-In functions
        uriMap.put("prop", "http://marklogic.com/xdmp/property"); // MarkLogic Server properties
        uriMap.put("sec", "http://marklogic.com/xdmp/security"); // security Built-In functions
        uriMap.put("sem", "http://marklogic.com/semantics"); // semantic Built-In functions
        uriMap.put("spell", "http://marklogic.com/xdmp/spell"); // spelling correction functions
        uriMap.put("xdmp", "http://marklogic.com/xdmp"); // MarkLogic Server Built-In functions
        uriMap.put("xml", "http://www.w3.org/XML/1998/namespace"); // XML namespace
        uriMap.put("xmlns", "http://www.w3.org/2000/xmlns/"); // xmlns namespace
        uriMap.put("xqe", "http://marklogic.com/xqe"); // deprecated MarkLogic Server xqe namespace
        uriMap.put("xqterr", "http://www.w3.org/2005/xqt-errors"); // XQuery test suite errors (same as err)
        uriMap.put("xs", "http://www.w3.org/2001/XMLSchema"); // XML Schema namespace

        ANTLRInputStream inputStream = new ANTLRInputStream(txt);
        XQueryLexer markupLexer = new XQueryLexer(inputStream);
        CommonTokenStream commonTokenStream = new CommonTokenStream(markupLexer);
        XQueryParser markupParser = new XQueryParser(commonTokenStream);
        XQueryParser.ModuleContext fileContext = markupParser.module();
        StringBuffer buffer = new StringBuffer();
        XQueryVisitor visitor = new XQueryVisitor(buffer, uriMap);
        visitor.visit(fileContext);
        return DocumentUtility.getStringFromDoc(DocumentUtility.getDocumentFromBuffer(buffer));
    }
}
The xqDoc codebase is here https://github.com/lcahlander/xqdoc
The code to display the xqDoc documents is here https://github.com/lcahlander/marklogic-xqdoc-display

java.lang.NoClassDefFoundError: math/geom2d/line/LinearShape2D (activiti)

I am trying to convert a BPMN 2.0 file to JSON, but I get this error:
java.lang.NoClassDefFoundError: math/geom2d/line/LinearShape2D
My code:
public void convertXmlToJson() throws Exception {
    BpmnXMLConverter bpmnXMLConverter = new BpmnXMLConverter();
    XMLInputFactory factory = XMLInputFactory.newInstance();
    // get a Reader connected to the XML input from filename
    Reader reader = new FileReader(filename);
    XMLStreamReader streamReader = factory.createXMLStreamReader(reader);
    ObjectNode node = new BpmnJsonConverter().convertToJson(bpmnXMLConverter.convertToBpmnModel(streamReader));
    node.toString();
}
Well, one of the JARs in your build path is trying to load the class math.geom2d.line.LinearShape2D, but that class is not in your build path, so it cannot be found.
Add the JAR containing this class to the build path and it should work.
It seems you need this library:
http://geom-java.sourceforge.net/
http://geom-java.sourceforge.net/api/math/geom2d/line/class-use/LinearShape2D.html
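If the project is built with Maven, a hedged sketch of the dependency declaration (these are my assumed coordinates for the javaGeom library; verify the groupId, artifactId, and version against Maven Central before using):
<dependency>
    <!-- assumed coordinates for javaGeom; please verify -->
    <groupId>math.geom2d</groupId>
    <artifactId>javaGeom</artifactId>
    <version>0.11.1</version>
</dependency>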
