Design generic process using template pattern - java

I have a routine that I repeatedly do for many projects and I want to generalize it. I use iText for PDF manipulation.
Let's say I have 2000 PDFs inside a folder and I need to zip them together, with a limit of 1000 PDFs per zip. The zip name follows this rule: job name + job sequence. For example, the name of the first zip (the first 1000 PDFs) would be XNKXMN + AA and the second would be XNKXMN + AB. Before zipping these PDFs, I need to add some text to each PDF. The text looks like this: job name + job sequence + PDF sequence. So the first PDF inside the first zip will have the text XNKXMN + AA + 000001, and the one after that XNKXMN + AA + 000002. Here is my attempt.
First I have an abstract class GenericText that represents my text.
public abstract class GenericText {
    private float x;
    private float y;
    private float rotation;

    /**
     * Since the text that the user wants to insert onto the PDF might vary
     * from page to page, or from logical document to logical document, we allow
     * the user to write their own implementation of the text. To give the user enough
     * flexibility, we give them the physical page index and the logical page index.
     *
     * @param physicalPage The actual page number that the user is currently looking at
     * @param logicalPage  A PDF might contain multiple sub-documents; <code>logicalPage</code>
     *                     tells the user which logical sub-document the system is currently looking at
     */
    public abstract String generateText(int physicalPage, int logicalPage);

    GenericText(float x, float y, float rotation) {
        this.x = x;
        ...
    }
}
JobGenerator.java: my generic API to do what I described above
public String generatePrintJob(List<File> pdfList, String outputPath,
        String printName, String seq, List<GenericText> textList, int maxSize) {
    for (int currentPdfDocument = 0; currentPdfDocument < pdfList.size(); currentPdfDocument++) {
        File pdf = pdfList.get(currentPdfDocument);
        if (currentPdfDocument % maxSize != 0) {
            if (textList != null && !textList.isEmpty()) {
                for (GenericText gt : textList) {
                    String text = gt.generateText(currentPdfDocument, currentPdfDocument);
                    // Add the text content to the PDF using PdfReader and PdfWriter
                }
            }
            ...
        } else {
            // Close the current output stream and zip output stream
            seq = Utils.getNextSeq(seq);
            jobPath = outputPath + File.separator + printName + File.separator + seq + ".zip";
            // Open a new zip output stream with the new <code>jobPath</code>
        }
    }
}
So now in my main class I would just do this
final String printName = printNameLookup.get(baseOutputName);
String jobSeq = config.getPrintJobSeq();
final String seq = jobSeq;
GenericText keyline = new GenericText(90, 640, 0) {
    @Override
    public String generateText(int physicalPage, int logicalPage) {
        // if logicalPage = 1, Utils.right(String.valueOf(logicalPage), 6, '0') -> 000001
        return printName + seq + " " + Utils.right(String.valueOf(logicalPage), 6, '0');
    }
};
textList.add(keyline);
JobGenerator pjg = new JobGenerator();
pjg.generatePrintJob(...,..., printName, jobSeq, textList, 1000);
The problem I am having with this design is that, even though the PDFs are archived into the two zips correctly, the text is not reflected correctly. The print name and the sequence do not change accordingly: the text stays XNKXMN + AA for all 2000 PDFs instead of XNKXMN + AA for the first 1000 and XNKXMN + AB for the last 1000. There seems to be a flaw in my design; please help.
EDIT:
After looking at toto2's code, I see my problem. I created GenericText with the hope of adding text anywhere on the PDF page without affecting the basic logic of the process. However, the job sequence by definition depends on that logic, as it needs to increment when there are too many PDFs for one zip to handle (> maxSize). I need to rethink this.

When you create an anonymous GenericText, the final seq which you use in the overridden generateText method is truly final and will always remain the value it was given at creation time. The update you perform on seq inside the else branch of generatePrintJob does nothing.
On a more general note, your code looks very complex and you should probably take a step back and do some major refactoring.
EDIT:
I would instead try something different, with no template method pattern:
int numberOfZipFiles = (int) Math.ceil((double) pdfList.size() / maxSize);
for (int iZip = 0; iZip < numberOfZipFiles; iZip++) {
String batchSubName = generateBatchSubName(iZip); // gives AA, AB,...
for (int iFile = 0; iFile < maxSize; iFile++) {
int fileNumber = iZip * maxSize + iFile;
if (fileNumber >= pdfList.size()) // can happen for last batch
return;
String text = jobName + batchSubName + iFile;
... add "text" to pdfList.get(fileNumber)
}
}
However, you might also want to maintain the template pattern. In that case, I would keep the for-loops I wrote above, but I would change the generating method to genericText.generateText(iZip, iFile) where iZip = 0 gives AA and iZip = 1 gives AB, etc:
for (int iZip = 0; iZip < numberOfZipFiles; iZip++) {
for (int iFile = 0; iFile < maxSize; iFile++) {
int fileNumber = iZip * maxSize + iFile;
if (fileNumber >= pdfList.size()) // can happen for last batch
return;
String text = genericText.generateText(iZip, iFile);
... add "text" to pdfList.get(fileNumber)
}
}
It would also be possible to have genericText.generateText(fileNumber), which could itself decompose the fileNumber into AA000001, etc. But that would be somewhat dangerous because maxSize would be used in two different places, and it is bug-prone to have duplicate data like that.
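To make the two-index approach concrete, here is a minimal, self-contained sketch (my own illustration, not code from the question or the answer above; generateBatchSubName is an assumed helper shown inline):
// Illustrative sketch: the callback receives zipIndex and fileIndex and derives
// everything from them, so no captured "seq" variable can go stale.
public abstract class GenericTextSketch {

    public abstract String generateText(int zipIndex, int fileIndex);

    // Assumed helper: 0 -> "AA", 1 -> "AB", ..., 26 -> "BA"
    static String generateBatchSubName(int zipIndex) {
        char first = (char) ('A' + (zipIndex / 26) % 26);
        char second = (char) ('A' + zipIndex % 26);
        return "" + first + second;
    }

    public static void main(String[] args) {
        final String jobName = "XNKXMN";
        GenericTextSketch keyline = new GenericTextSketch() {
            @Override
            public String generateText(int zipIndex, int fileIndex) {
                // fileIndex is 0-based within the zip; pad the sequence to six digits
                return jobName + generateBatchSubName(zipIndex)
                        + " " + String.format("%06d", fileIndex + 1);
            }
        };
        System.out.println(keyline.generateText(0, 0));   // XNKXMNAA 000001
        System.out.println(keyline.generateText(1, 999)); // XNKXMNAB 001000
    }
}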

Related

JNA parse description strings from windows event log record

I'm using JNA to read some event logs produced by my application. I'm mostly interested in the description strings data.
I'm using the code below:
private static void readLog() {
    Advapi32Util.EventLogIterator iter = new Advapi32Util.EventLogIterator("Application");
    while (iter.hasNext()) {
        Advapi32Util.EventLogRecord record = iter.next();
        System.out.println("------------------------------------------------------------");
        System.out.println(record.getRecordNumber()
                + ": Event ID: " + record.getInstanceId()
                + ", Event Type: " + record.getType()
                + ", Event Strings: " + Arrays.toString(record.getStrings())
                + ", Data: " + record.getRecord().toString());
        System.out.println();
    }
}
Example event my application produces:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-MyApp" Guid="{4d5ae6a1-c7c8-4e6d-b840-4d8080b42e1b}" />
<EventID>201</EventID>
<Version>0</Version>
<Level>2</Level>
<Task>2</Task>
<Opcode>30</Opcode>
<Keywords>0x4010000001000000</Keywords>
<TimeCreated SystemTime="2021-02-19T15:16:03.675690900Z" />
<EventRecordID>3622</EventRecordID>
<Correlation ActivityID="{e6ee2b3b-9b9a-4c9d-b39b-6c2bf2550000}" />
<Execution ProcessID="2108" ThreadID="8908" />
<Channel>Microsoft-Windows-MyApp/Operational</Channel>
<Computer>computer</Computer>
<Security UserID="S-1-5-20" />
</System>
<UserData>
<EventInfo xmlns="aag">
<Username>username</Username>
<IpAddress>127.0.0.1</IpAddress>
<AuthType>NTLM</AuthType>
<Resource />
<ConnectionProtocol>HTTP</ConnectionProtocol>
<ErrorCode>23003</ErrorCode>
</EventInfo>
</UserData>
</Event>
Other event UserData:
<UserData>
<EventInfo xmlns="aag">
<Username>otherUserName</Username>
<IpAddress>10.235.163.52:50427</IpAddress>
</EventInfo>
</UserData>
JNA provides event log records through the EVENTLOGRECORD class, which only contains methods to get the values of the description strings. If I could get the record in XML format my problem would be gone.
The data in UserData is not always the same; it contains different values depending on the event type. I want to parse the data from the UserData section into a POJO (it can be just one POJO containing all available fields). I don't want to rely on field order, because some events have different fields than others (as shown in the example).
Is there any way to do this using XML tag names? I would even consider switching to another language.
As I pointed out in my comment, you need to render the event to get to the XML. Matthias Bläsing also pointed out that some sample code is available in the WevtapiTest test class in JNA.
Using that test class, I created the below program which reads the XML from the latest 50 events from the System log. Filtering events to what you want is left as an exercise for the reader.
public static void main(String[] args) {
EVT_HANDLE queryHandle = null;
// Requires elevation or shared access to the log.
String path = "C:\\Windows\\System32\\Winevt\\Logs\\System.evtx";
try {
queryHandle = Wevtapi.INSTANCE.EvtQuery(null, path, null, Winevt.EVT_QUERY_FLAGS.EvtQueryFilePath);
// Read 10 events at a time
int eventArraySize = 10;
int evtNextTimeout = 1000;
int arrayIndex = 0;
EVT_HANDLE[] eventArray = new EVT_HANDLE[eventArraySize];
IntByReference returned = new IntByReference();
while (Wevtapi.INSTANCE.EvtNext(queryHandle, eventArraySize, eventArray, evtNextTimeout, 0, returned)) {
Memory buff;
// This just needs to be 0. Kept same name from test sample
IntByReference propertyCount = new IntByReference();
Winevt.EVT_VARIANT evtVariant = new Winevt.EVT_VARIANT();
for (int i = 0; i < returned.getValue(); i++) {
buff = WevtapiUtil.EvtRender(null, eventArray[i], Winevt.EVT_RENDER_FLAGS.EvtRenderEventXml,
propertyCount);
// Output the XML!
System.out.println(buff.getWideString(0));
}
arrayIndex++;
// Quit after 5 x 10 events
if (arrayIndex >= 5) {
break;
}
}
if (Kernel32.INSTANCE.GetLastError() != WinError.ERROR_SUCCESS
&& Kernel32.INSTANCE.GetLastError() != WinError.ERROR_NO_MORE_ITEMS) {
throw new Win32Exception(Kernel32.INSTANCE.GetLastError());
}
} finally {
if (queryHandle != null) {
Wevtapi.INSTANCE.EvtClose(queryHandle);
}
}
}
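If the goal is then to map the rendered XML to a POJO by tag name, a minimal sketch using the JDK's built-in DOM parser could look like the following. This is my own illustration, not part of the answer above; the field names are taken from the sample event, and any tag missing from a given event is simply left null.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class EventInfo {
    String username;
    String ipAddress;
    String authType;
    String errorCode;

    // Returns the text of the first element with the given local name, or null if absent.
    static String tagText(Document doc, String tag) {
        NodeList nodes = doc.getElementsByTagNameNS("*", tag);
        return nodes.getLength() > 0 ? nodes.item(0).getTextContent() : null;
    }

    static EventInfo fromXml(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        EventInfo info = new EventInfo();
        info.username = tagText(doc, "Username");
        info.ipAddress = tagText(doc, "IpAddress");
        info.authType = tagText(doc, "AuthType");
        info.errorCode = tagText(doc, "ErrorCode");
        return info;
    }
}
Each rendered XML string (buff.getWideString(0) in the code above) can then be passed to EventInfo.fromXml.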

how to figure out which character doesn't map to utf-8

I maintain a small java servlet-based webapp that presents forms for input, and writes the contents of those forms to MariaDB.
The app runs on a Linux box, although the users visit the webapp from Windows.
Some users paste text into these forms that was copied from MSWord docs, and when that happens, they get internal exceptions like the following:
Caused by: org.mariadb.jdbc.internal.util.dao.QueryException:
Incorrect string value: '\xC2\x96 for...' for column 'ssimpact' at row 1
For instance, I tested it with text like the following:
Project – for
Where the dash is a "long dash" from the MSWord document.
I don't think it's possible to convert the wayward characters in this text to the "correct" characters, so I'm trying to figure out how to produce a reasonable error message that shows a substring of the bad text in question, along with the index of the first bad character.
I noticed postings like this: How to determine if a String contains invalid encoded characters.
I thought this would get me close, but it's not quite working.
I'm trying to use the following method:
private int findUnmappableCharIndex(String entireString) {
    int charIndex;
    for (charIndex = 0; charIndex < entireString.length(); ++ charIndex) {
        String currentChar = entireString.substring(charIndex, charIndex + 1);
        CharBuffer out = CharBuffer.wrap(new char[currentChar.length()]);
        CharsetDecoder decoder = Charset.forName("utf-8").newDecoder();
        CoderResult result = decoder.decode(ByteBuffer.wrap(currentChar.getBytes()), out, true);
        if (result.isError() || result.isOverflow() || result.isUnderflow() || result.isMalformed() || result.isUnmappable()) {
            break;
        }
        CoderResult flushResult = decoder.flush(out);
        if (flushResult.isOverflow()) {
            break;
        }
    }
    if (charIndex == entireString.length() + 1) {
        charIndex = -1;
    }
    return charIndex;
}
This doesn't work. I get "underflow" on the first character, which is a valid character. I'm sure I don't fully understand the decoder mechanism.
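For context (my note, not from the thread): CoderResult.UNDERFLOW is the normal outcome of decode() once all input has been consumed, so treating it as an error makes the loop break on the very first character. A minimal sketch that only flags genuinely malformed or unmappable byte sequences, assuming the raw bytes are available, might look like this:
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class Utf8Check {
    // Returns the byte offset of the first invalid UTF-8 sequence, or -1 if the bytes are valid UTF-8.
    static int firstInvalidByte(byte[] bytes) {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder(); // reports errors by default
        ByteBuffer in = ByteBuffer.wrap(bytes);
        CharBuffer out = CharBuffer.allocate(bytes.length);
        CoderResult result = decoder.decode(in, out, true);
        if (result.isMalformed() || result.isUnmappable()) {
            return in.position(); // decoding stopped at the offending sequence
        }
        return -1; // UNDERFLOW here simply means "all input consumed"
    }
}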

Jsoup fetches wrong results

I am working with Jsoup. The URL works fine in the browser, but it fetches the wrong result on the server. I set maxBodySize to 0 as well, but it still only gets the first few tags. Moreover, the data is different from what the browser shows. Can you give me a hand?
String queryUrl = "http://www.juso.go.kr/addrlink/addrLinkApi.do?confmKey=U01TX0FVVEgyMDE3MDYyODE0MTYyMzIyMTcw&currentPage=1&countPerPage=20&keyword=연남동";
Document document = Jsoup.connect(queryUrl).maxBodySize(0).get();
Are you aware that this endpoint returns paginated data? Your URL asks for 20 entries from the first page. I assume that the order of these entries is not specified, so you can get different data each time you call this endpoint - check if there is a URL parameter that determines a specific sort order.
Anyway, to read all 2037 entries you have to do it sequentially. Examine the following code:
final String baseUrl = "http://www.juso.go.kr/addrlink/addrLinkApi.do";
final String key = "U01TX0FVVEgyMDE3MDYyODE0MTYyMzIyMTcw";
final String keyword = "연남동";
final int perPage = 100;
int currentPage = 1;
while (true) {
    System.out.println("Downloading data from page " + currentPage);
    final String url = String.format("%s?confmKey=%s&currentPage=%d&countPerPage=%d&keyword=%s", baseUrl, key, currentPage, perPage, keyword);
    final Document document = Jsoup.connect(url).maxBodySize(0).get();
    final Elements jusos = document.getElementsByTag("juso");
    System.out.println("Found " + jusos.size() + " juso entries");
    if (jusos.size() == 0) {
        break;
    }
    currentPage += 1;
}
In this case we are asking for 100 entries per page (that's the maximum number this endpoint supports) and we call it 21 times, as long as the call for a given page returns any <juso> element. Hope this helps solve your problem.
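As a follow-up illustration (my own sketch, not part of the answer), the same loop can collect every <juso> element across all pages into one list before further processing; which tags each entry contains depends on the API and is not shown here:
// Sketch: accumulate all <juso> elements from every page (baseUrl, key, perPage
// and keyword are the variables defined in the answer above).
final List<Element> allJusos = new ArrayList<>();
int page = 1;
while (true) {
    final String url = String.format("%s?confmKey=%s&currentPage=%d&countPerPage=%d&keyword=%s",
            baseUrl, key, page, perPage, keyword);
    final Elements jusos = Jsoup.connect(url).maxBodySize(0).get().getElementsByTag("juso");
    if (jusos.isEmpty()) {
        break;                      // no more pages
    }
    allJusos.addAll(jusos);         // Elements is a List<Element>, so addAll works directly
    page++;
}
System.out.println("Collected " + allJusos.size() + " entries in total");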

How to export repeat grid layout data to Excel using pzRDExportWrapper in Pega 7.1.8?

I am trying to export repeat grid data to Excel. To do this, I have provided a button which runs the "MyCustomActivity" activity when clicked. The button is placed above the grid in the same layout. It is also worth pointing out that I am using an article as a guide for the configuration. According to the guide, my "MyCustomActivity" activity contains two steps:
Method: Property-Set, Method Parameters: Param.exportmode = "excel"
Method: Call pzRDExportWrapper, passing the current parameters (there is only one, from the 1st step).
But after I ran into an issue I changed the 2nd step to Call Rule-Obj-Report-Definition.pzRDExportWrapper
But as you have already understood, the solution doesn't work. I checked the log files and found an interesting error:
2017-04-11 21:08:27,992 [ WebContainer : 4] [OpenPortal] [ ] [ MyFW:01.01.02] (ctionWrapper._baseclass.Action) ERROR as1|172.22.254.110 bar - Activity 'MyCustomActivity' failed to execute; Failed to find a 'RULE-OBJ-ACTIVITY' with the name 'PZRESOLVECOPYFILTERS' that applies to 'COM-FW-MyFW-Work'. There were 3 rules with this name in the rulebase, but none matched this request. The 3 rules named 'PZRESOLVECOPYFILTERS' defined in the rulebase are:
2017-04-11 21:08:42,807 [ WebContainer : 4] [TABTHREAD1] [ ] [ MyFW:01.01.02] (fileSetup.Code_Security.Action) ERROR as1|172.22.254.110 bar - External authentication failed:
If anyone has any suggestions to share, I would appreciate it.
Thank you.
I wanted to provide functionality for exporting retrieved work objects to a CSV file. The functionality should allow choosing which fields to retrieve, all results should be in Ukrainian, and it should be able to use any SearchFilter pages and Report Definition rules.
In the User Portal I have two sections: the first section contains text fields and a Search button, and the second is a section with a Repeat Grid to display results. The text fields are used to filter results and they use the page Org-Div-Work-SearchFilter.
I made a custom CSV exporter: I created two activities and wrote some Java code. I should mention that I took some code from pzRDExportWrapper.
The activities are:
ExportToCSV - takes parameters from the user, gets the data, and invokes ConvertResultsToCSV;
ConvertResultsToCSV - converts retrieved data to a .CSV file.
Configurations of the ExportToCSV activity:
The Pages And Classes tab:
ReportDefinition is an object of a certain Report Definition.
SearchFilter is a page with the values entered by the user.
ReportDefinitionResults is a list of retrieved work objects to export.
ReportDefinitionResults.pxResults denotes the type of a certain work object.
The Parameters tab:
FileName is the name of the generated file.
ColumnsNames is the list of column names, separated by commas. If the parameter is empty then CSVProperties is exported.
CSVProperties is the list of properties to display in the spreadsheet, separated by commas.
SearchPageName is the name of the page used to filter results.
ReportDefinitionName is the name of the Report Definition used to retrieve results.
ReportDefinitionClass is the class of the utilized Report Definition.
The Step tab:
Let's look through the steps:
1. Get a SearchFilter page, whose name comes from a parameter, with populated fields:
2. If SearchFilter is not empty, call a Data Transform to convert SearchFilter's properties to parameter properties:
A fragment of the Data Transform:
3. Get an object of a Report Definition:
4. Set parameters for the Report Definition:
5. Invoke the Report Definition and save results to ReportDefinitionResults:
6. Invoke the ConvertResultsToCSV activity:
7. Delete the result page:
The overview of the ConvertResultsToCSV activity.
The Parameters tab of the ConvertResultsToCSV activity:
CSVProperties are the properties to retrieve and export.
ColumnsNames are the names of the columns to display.
PageListProperty is the name of the property to be read on the primary page.
FileName is the name of the generated file. It can be empty.
AppendTimeStampToFileName - if true, the time of file generation is appended to the file name.
CSVString is the generated CSV string to be saved to a file.
FileName is the name of the file.
listSeperator is always a semicolon to separate fields.
Let's skim through the steps in the activity:
Get the localization from the user settings (commented out):
In theory it is able to support localization in many languages.
Always set the "uk" (Ukrainian) localization.
Get the list separator according to the localization. It is always a semicolon in Ukrainian, English and Russian; it needs to be checked for other languages.
This step contains Java code, which forms the CSV string:
StringBuffer csvContent = new StringBuffer(); // a content of buffer
String pageListProp = tools.getParamValue("PageListProperty");
ClipboardProperty resultsProp = myStepPage.getProperty(pageListProp);
// fill the properties names list
java.util.List<String> propertiesNames = new java.util.LinkedList<String>(); // names of properties which values display in csv
String csvProps = tools.getParamValue("CSVProperties");
propertiesNames = java.util.Arrays.asList(csvProps.split(","));
// get user's colums names
java.util.List<String> columnsNames = new java.util.LinkedList<String>();
String CSVDisplayProps = tools.getParamValue("ColumnsNames");
if (!CSVDisplayProps.isEmpty()) {
columnsNames = java.util.Arrays.asList(CSVDisplayProps.split(","));
} else {
columnsNames.addAll(propertiesNames);
}
// add columns to csv file
Iterator columnsIter = columnsNames.iterator();
while (columnsIter.hasNext()) {
csvContent.append(columnsIter.next().toString());
if (columnsIter.hasNext()){
csvContent.append(listSeperator); // listSeperator - local variable
}
}
csvContent.append("\r");
for (int i = 1; i <= resultsProp.size(); i++) {
ClipboardPage propPage = resultsProp.getPageValue(i);
Iterator iterator = propertiesNames.iterator();
int propTypeIndex = 0;
while (iterator.hasNext()) {
ClipboardProperty clipProp = propPage.getIfPresent((iterator.next()).toString());
String propValue = "";
if(clipProp != null && !clipProp.isEmpty()) {
char propType = clipProp.getType();
propValue = clipProp.getStringValue();
if (propType == ImmutablePropertyInfo.TYPE_DATE) {
DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
long mills = dtu.parseDateString(propValue);
java.util.Date date = new Date(mills);
String sdate = dtu.formatDateTimeStamp(date);
propValue = dtu.formatDateTime(sdate, "dd.MM.yyyy", "", "");
}
else if (propType == ImmutablePropertyInfo.TYPE_DATETIME) {
DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
propValue = dtu.formatDateTime(propValue, "dd.MM.yyyy HH:mm", "", "");
}
else if ((propType == ImmutablePropertyInfo.TYPE_DECIMAL)) {
propValue = PRNumberFormat.format(localeCode,PRNumberFormat.DEFAULT_DECIMAL, false, null, new BigDecimal(propValue));
}
else if (propType == ImmutablePropertyInfo.TYPE_DOUBLE) {
propValue = PRNumberFormat.format(localeCode,PRNumberFormat.DEFAULT_DECIMAL, false, null, Double.parseDouble(propValue));
}
else if (propType == ImmutablePropertyInfo.TYPE_TEXT) {
propValue = clipProp.getLocalizedText();
}
else if (propType == ImmutablePropertyInfo.TYPE_INTEGER) {
Integer intPropValue = Integer.parseInt(propValue);
if (intPropValue < 0) {
propValue = new String();
}
}
}
if(propValue.contains(listSeperator)){
csvContent.append("\""+propValue+"\"");
} else {
csvContent.append(propValue);
}
if(iterator.hasNext()){
csvContent.append(listSeperator);
}
propTypeIndex++;
}
csvContent.append("\r");
}
CSVString = csvContent.toString();
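One detail worth noting (my own addition, not part of the original activity): the code above quotes a value when it contains the list separator, but it does not escape quotes already present in the value, nor embedded line breaks. A small helper along the lines of RFC 4180 could cover both:
// Sketch of a stricter CSV field escape: quote when the value contains the
// separator, a quote, or a line break, and double any embedded quotes.
static String escapeCsvField(String value, String listSeparator) {
    if (value == null) {
        return "";
    }
    boolean needsQuoting = value.contains(listSeparator)
            || value.contains("\"")
            || value.contains("\r")
            || value.contains("\n");
    return needsQuoting ? "\"" + value.replace("\"", "\"\"") + "\"" : value;
}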
5. This step forms and saves the file in the server's directory tree:
char sep = PRFile.separatorChar;
String exportPath= tools.getProperty("pxProcess.pxServiceExportPath").getStringValue();
DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
String fileNameParam = tools.getParamValue("FileName");
if(fileNameParam.equals("")){
fileNameParam = "RecordsToCSV";
}
//append a time stamp
Boolean appendTimeStamp = tools.getParamAsBoolean(ImmutablePropertyInfo.TYPE_TRUEFALSE,"AppendTimeStampToFileName");
FileName += fileNameParam;
if(appendTimeStamp) {
FileName += "_";
String currentDateTime = dtu.getCurrentTimeStamp();
currentDateTime = dtu.formatDateTime(currentDateTime, "HH-mm-ss_dd.MM.yyyy", "", "");
FileName += currentDateTime;
}
//append a file format
FileName += ".csv";
String strSQLfullPath = exportPath + sep + FileName;
PRFile f = new PRFile(strSQLfullPath);
PROutputStream stream = null;
PRWriter out = null;
try {
// Create file
stream = new PROutputStream(f);
out = new PRWriter(stream, "UTF-8");
// Bug with Excel reading a file starting with 'ID' as SYLK file. If CSV starts with ID, prepend an empty space.
if(CSVString.startsWith("ID")){
CSVString=" "+CSVString;
}
out.write(CSVString);
} catch (Exception e) {
oLog.error("Error writing csv file: " + e.getMessage());
} finally {
try {
// Close the output stream
out.close();
} catch (Exception e) {
oLog.error("Error of closing a file stream: " + e.getMessage());
}
}
The last step calls @baseclass.DownloadFile to download the file:
Finally, we can place a button on some section or somewhere else and set up the Actions tab like this:
It also works fine inside the "Refresh Section" action.
A possible result could look like this:
Thanks for reading.

Is there a smart way to write a fixed-length flat file?

Is there any framework/library to help with writing fixed-length flat files in Java?
I want to write a collection of beans/entities into a flat file without worrying about conversions, padding, alignment, fillers, etc.
For example, I'd like to parse a bean like:
public class Entity {
    String name = "name";       // length = 10; align left; fill with spaces
    Integer id = 123;           // length = 5; align left; fill with spaces
    Integer serial = 321;       // length = 5; align to right; fill with '0'
    Date register = new Date(); // length = 8; convert to yyyyMMdd
}
... into ...
name      123  0032120110505
mikhas    5000 0122120110504
superuser 1    0000120101231
...
You're not likely to encounter a framework that can cope with a "Legacy" system's format. In most cases, Legacy systems don't use standard formats, but frameworks expect them. As a maintainer of legacy COBOL systems and a Java/Groovy convert, I encounter this mismatch frequently. "Worrying with conversions, padding, alignment, fillers, etcs" is primarily what you do when dealing with a legacy system. Of course, you can encapsulate some of it away into handy helpers. But most likely, you'll need to get real familiar with java.util.Formatter.
For example, you might use the Decorator pattern to create decorators to do the conversion. Below is a bit of Groovy (easily convertible into Java):
class Entity{
String name = "name"; // length = 10; align left; fill with spaces
Integer id = 123; // length = 5; align left; fill with spaces
Integer serial = 321 // length = 5; align to right; fill with '0'
Date register = new Date();// length = 8; convert to yyyyMMdd
}
class EntityLegacyDecorator {
Entity d
EntityLegacyDecorator(Entity d) { this.d = d }
String asRecord() {
return String.format('%-10s%-5d%05d%tY%<tm%<td',
d.name,d.id,d.serial,d.register)
}
}
def e = new Entity(name: 'name', id: 123, serial: 321, register: new Date('2011/05/06'))
assert new EntityLegacyDecorator(e).asRecord() == 'name      123  0032120110506'
This is workable if you don't have too many of these and the objects aren't too complex. But pretty quickly the format string gets intolerable. Then you might want decorators for Date, like:
class DateYMD {
Date d
DateYMD(d) { this.d = d }
String toString() { return d.format('yyyyMMdd') }
}
so you can format with %s:
String asRecord() {
return String.format('%-10s%-5d%05d%s',
d.name,d.id,d.serial,new DateYMD(d.register))
}
But for a significant number of bean properties the string is still too gross, so you want something that understands columns and lengths and looks like the COBOL spec you were handed, so you'll write something like this:
class RecordBuilder {
final StringBuilder record
RecordBuilder(recordSize) {
record = new StringBuilder(recordSize)
record.setLength(recordSize)
}
def setField(pos,length,String s) {
record.replace(pos - 1, pos + length, s.padRight(length))
}
def setField(pos,length,Date d) {
setField(pos,length, new DateYMD(d).toString())
}
def setField(pos,length, Integer i, boolean padded) {
if (padded)
setField(pos,length, String.format("%0" + length + "d",i))
else
setField(pos,length, String.format("%-" + length + "d",i))
}
String toString() { record.toString() }
}
class EntityLegacyDecorator {
Entity d
EntityLegacyDecorator(Entity d) { this.d = d }
String asRecord() {
RecordBuilder record = new RecordBuilder(28)
record.setField(1,10,d.name)
record.setField(11,5,d.id,false)
record.setField(16,5,d.serial,true)
record.setField(21,8,d.register)
return record.toString()
}
}
After you've written enough setField() methods to handle your legacy system, you'll briefly consider posting it on GitHub as a "framework" so the next poor sap doesn't have to do it again. But then you'll consider all the ridiculous ways you've seen COBOL store a "date" (MMDDYY, YYMMDD, YYDDD, YYYYDDD) and numerics (assumed decimal, explicit decimal, sign as trailing separate or sign as leading floating character). Then you'll realize why nobody has produced a good framework for this and occasionally post bits of your production code into SO as an example... ;)
If you are still looking for a framework, check out BeanIO at http://www.beanio.org
uniVocity-parsers goes a long way to support tricky fixed-width formats, including lines with different fields, paddings, etc.
Check out this example to write imaginary client & account details. It uses a lookahead value to identify which format to use when writing a row:
FixedWidthFields accountFields = new FixedWidthFields();
accountFields.addField("ID", 10); //account ID has length of 10
accountFields.addField("Bank", 8); //bank name has length of 8
accountFields.addField("AccountNumber", 15); //etc
accountFields.addField("Swift", 12);
//Format for clients' records
FixedWidthFields clientFields = new FixedWidthFields();
clientFields.addField("Lookahead", 5); //clients have their lookahead in a separate column
clientFields.addField("ClientID", 15, FieldAlignment.RIGHT, '0'); //let's pad client ID's with leading zeroes.
clientFields.addField("Name", 20);
FixedWidthWriterSettings settings = new FixedWidthWriterSettings();
settings.getFormat().setLineSeparator("\n");
settings.getFormat().setPadding('_');
//If a record starts with C#, it's a client record, so we associate "C#" with the client format.
settings.addFormatForLookahead("C#", clientFields);
//Rows starting with A# should be written using the account format
settings.addFormatForLookahead("A#", accountFields);
StringWriter out = new StringWriter();
//Let's write
FixedWidthWriter writer = new FixedWidthWriter(out, settings);
writer.writeRow(new Object[]{"C#",23234, "Miss Foo"});
writer.writeRow(new Object[]{"A#23234", "HSBC", "123433-000", "HSBCAUS"});
writer.writeRow(new Object[]{"A#234", "HSBC", "222343-130", "HSBCCAD"});
writer.writeRow(new Object[]{"C#",322, "Mr Bar"});
writer.writeRow(new Object[]{"A#1234", "CITI", "213343-130", "CITICAD"});
writer.close();
System.out.println(out.toString());
The output will be:
C#___000000000023234Miss Foo____________
A#23234___HSBC____123433-000_____HSBCAUS_____
A#234_____HSBC____222343-130_____HSBCCAD_____
C#___000000000000322Mr Bar______________
A#1234____CITI____213343-130_____CITICAD_____
This is just a rough example. There are many other options available, including support for annotated java beans, which you can find here.
Disclosure: I'm the author of this library, it's open-source and free (Apache 2.0 License)
The library Fixedformat4j is a pretty neat tool to do exactly this: http://fixedformat4j.ancientprogramming.com/
Spring Batch has a FlatFileItemWriter, but that won't help you unless you use the whole Spring Batch API.
But apart from that, I'd say you just need a library that makes writing to files easy (unless you want to write the whole IO code yourself).
Two that come to mind are:
Guava
Files.write(stringData, file, Charsets.UTF_8);
Commons / IO
FileUtils.writeStringToFile(file, stringData, "UTF-8");
Don't know of any framework, but you can just use RandomAccessFile. You can position the file pointer anywhere in the file to do your reads and writes.
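A minimal sketch of that idea (my own illustration; the record length of 28 and the space padding are assumptions matching the example above, not something this answer specifies):
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class FixedLengthWriter {
    static final int RECORD_LENGTH = 28; // assumed record size in bytes

    // Writes a record at the given slot, padding with spaces or truncating to RECORD_LENGTH.
    static void writeRecord(RandomAccessFile raf, long recordNumber, String record) throws IOException {
        String padded = String.format("%-" + RECORD_LENGTH + "s", record);
        raf.seek(recordNumber * RECORD_LENGTH);
        raf.write(padded.substring(0, RECORD_LENGTH).getBytes(StandardCharsets.US_ASCII));
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("records.dat", "rw")) {
            writeRecord(raf, 0, "name      123  0032120110505");
            writeRecord(raf, 1, "mikhas    5000 0122120110504");
        }
    }
}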
I've just found a nice library that I'm using:
http://sourceforge.net/apps/trac/ffpojo/wiki
Very simple to configure with XML or annotations!
A simple way to write beans/entities to a flat file is to use ObjectOutputStream.
public static void writeToFile(File file, Serializable object) throws IOException {
    ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(file));
    oos.writeObject(object);
    oos.close();
}
You can write to a fixed length flat file with
FileUtils.writeByteArrayToFile(new File(filename), new byte[length]);
You need to be more specific about what you want to do with the file. ;)
Try the FFPOJO API, as it has everything you need to create a flat file with fixed-length fields, and it will also convert a file to an object and vice versa.
@PositionalRecord
public class CFTimeStamp {

    String timeStamp;

    public CFTimeStamp(String timeStamp) {
        this.timeStamp = timeStamp;
    }

    @PositionalField(initialPosition = 1, finalPosition = 26, paddingAlign = PaddingAlign.RIGHT, paddingCharacter = '0')
    public String getTimeStamp() {
        return timeStamp;
    }

    @Override
    public String toString() {
        try {
            FFPojoHelper ffPojo = FFPojoHelper.getInstance();
            return ffPojo.parseToText(this);
        } catch (FFPojoException ex) {
            trsLogger.error(ex.getMessage(), ex);
        }
        return null;
    }
}
