Need to have console output stored in a database table - java

I wrote code that counts the number of files in a zip file. I am currently printing the information to the console, but I am not sure how to get started on writing it to a table in Microsoft SQL Server instead. I have the code below:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import java.util.zip.ZipFile;

public class KZF {

    // Counts the non-directory entries in a zip file; returns -1 if the file cannot be read.
    static int findNumberOfFiles(File file) {
        try (ZipFile zipFile = new ZipFile(file)) {
            return (int) zipFile.stream().filter(z -> !z.isDirectory()).count();
        } catch (Exception e) {
            return -1;
        }
    }

    static String createInfo(File file) {
        int tot = findNumberOfFiles(file) - 1;
        return file.getName() + ": " + (tot >= 0 ? tot + " files" : "Error reading zip file");
    }

    public static void main(String[] args) throws IOException {
        String dirLocation = "C:\\Users\\username\\Documents\\Temp\\AllKo";
        try (Stream<Path> files = Files.list(Paths.get(dirLocation))) {
            files
                .filter(path -> path.toFile().isFile())
                .filter(path -> path.toString().toLowerCase().endsWith(".zip"))
                .map(Path::toFile)
                .map(KZF::createInfo)
                .forEach(System.out::println);
        }
    }
}

To interact with SQL-based databases in Java, the 'base layer' is a library called JDBC. This works as follows:
JDBC itself is part of plain Java, just as much as java.io.File is. However, it is only the basic API you use to interact with databases; it doesn't include support for any specific database. The API lives in the java.sql package.
You then need a so-called JDBC driver; in your case, the JDBC driver for Microsoft SQL Server. This driver needs to be on the classpath when you run your app; you don't need to reference any particular class file or 'load' it, just make sure it's on the classpath. If the jar is on the classpath, it automatically tells the JDBC system about its existence, and the JDBC system will then use it when you ask JDBC to connect to your Microsoft SQL Server database. Hence, nothing is required except for it to be present on the classpath.
JDBC is, from the point of view of interacting with DBs from plain-jane Java code, an intentionally convoluted and hard-to-use API: it is the lowest common denominator, the 'machine code' layer. It needs to expose all possible DB functionality for all possible SQL-based database engines and give you the tools to run them in all possible modes. Thus, I strongly advise you not to program against JDBC directly. Instead, use a library built on top of JDBC that gives you a nice, easy-to-understand API: use JDBI or jOOQ. Note that jOOQ is not free unless you use it with a free database, and Microsoft SQL Server isn't free, so be aware you may need to pay a license fee for jOOQ. JDBI is free.
In other words:
In your build system, add the com.microsoft.sqlserver :: mssql-jdbc :: 9.2.1.jre11 dependency.
In your build system, add the org.jdbi :: jdbi3-core :: 3.20.0 dependency.
Read the Microsoft SQL Server JDBC driver docs on how to build the so-called 'JDBC URL', which tells Java how to connect to your Microsoft SQL Server instance.
Read the JDBI documentation. It's not hard; right on the front page you can see the basic layout for sending INSERT statements. (The URL you learned about in the previous step is what you pass to the Jdbi.create() call.) A minimal sketch follows this list.
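To make this concrete, here is a minimal sketch of the question's pipeline writing rows via JDBI instead of printing. The table zip_file_counts and its columns are assumptions for illustration (create the table yourself first), and the JDBC URL and credentials are placeholders; it also assumes the KZF class from the question is on the same classpath:

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import org.jdbi.v3.core.Jdbi;

public class KZFToDatabase {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL; see the Microsoft docs for the exact format for your server.
        Jdbi jdbi = Jdbi.create(
                "jdbc:sqlserver://localhost:1433;databaseName=MyDb",
                "myUser", "myPassword");

        String dirLocation = "C:\\Users\\username\\Documents\\Temp\\AllKo";
        try (Stream<Path> files = Files.list(Paths.get(dirLocation))) {
            files.filter(path -> path.toFile().isFile())
                 .filter(path -> path.toString().toLowerCase().endsWith(".zip"))
                 .map(Path::toFile)
                 .forEach(file -> {
                     // Reuses the counting logic from the question's KZF class.
                     int count = KZF.findNumberOfFiles(file);
                     // Assumes: CREATE TABLE zip_file_counts (file_name VARCHAR(255), file_count INT)
                     jdbi.useHandle(handle -> handle.execute(
                             "INSERT INTO zip_file_counts (file_name, file_count) VALUES (?, ?)",
                             file.getName(), count));
                 });
        }
    }
}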

Much easier: you can use the entries() method to get an Enumeration of the ZipEntry objects in the zip file, and check each one to see if it isDirectory():
int countRegularFiles(final ZipFile zipFile) {
    final Enumeration<? extends ZipEntry> entries = zipFile.entries();
    int numRegularFiles = 0;
    while (entries.hasMoreElements()) {
        if (!entries.nextElement().isDirectory()) {
            ++numRegularFiles;
        }
    }
    return numRegularFiles;
}
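If you wanted to plug this into the code from the question, the existing helper could delegate to it (a sketch, assuming countRegularFiles is made static and placed in the same class):

static int findNumberOfFiles(File file) {
    try (ZipFile zipFile = new ZipFile(file)) {
        // Count entries via the Enumeration-based helper instead of the stream.
        return countRegularFiles(zipFile);
    } catch (Exception e) {
        return -1; // keep the original error convention
    }
}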

Related

Unable to load AWS credentials Error when accessing dynamoDB (local) with java

I have installed the local version of DynamoDB and set up a Maven Java project to access the DB. When I run the code I get the error below. Since I have installed the server locally (it runs on localhost:8000), I don't have any credentials to provide.
Any idea how to solve it?
import java.util.Iterator;

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.TableCollection;
import com.amazonaws.services.dynamodbv2.model.ListTablesResult;

public class Test {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(
                        // we can use any region here
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
                .build();
        DynamoDB dynamoDB = new DynamoDB(client);
        TableCollection<ListTablesResult> list = dynamoDB.listTables();
        Iterator<Table> iterator = list.iterator();
        System.out.println("Listing table names");
        while (iterator.hasNext()) {
            Table table = iterator.next();
            System.out.println(table.getTableName());
        }
        System.out.println("over");
    }
}
The error is:
Exception in thread "main" com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:131)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1115)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:728)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:1831)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1807)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.listTables(AmazonDynamoDBClient.java:1123)
at com.amazonaws.services.dynamodbv2.document.internal.ListTablesCollection.firstPage(ListTablesCollection.java:46)
at com.amazonaws.services.dynamodbv2.document.internal.PageIterator.next(PageIterator.java:45)
at com.amazonaws.services.dynamodbv2.document.internal.IteratorSupport.nextResource(IteratorSupport.java:87)
at com.amazonaws.services.dynamodbv2.document.internal.IteratorSupport.hasNext(IteratorSupport.java:55)
Stumbled upon this when I was searching for the same problem. After half a day of wasted time, I managed to solve the issue. Posting here in case anyone ever stumbles upon this situation again.
And the worst part? The solution I had to piece together through experimentation after going through thousands of pages is something you'd expect to be documented somewhere. At the least, the documentation should have mentioned it!
The solution:
Go through Configuring AWS Credentials to set up some credentials. Configure them as any random thing; the actual values don't really matter.
Yeah, this was it!!
And for those people who are still too lazy (like me ;-) ) to go through that, just follow the easiest of the methods:
Open the default config file: ~/.aws/credentials
Change the values in it to anything (like empty strings here):
[default]
aws_access_key_id=''
aws_secret_access_key=''
Run the program. You can thank me later :D
I had a similar issue. To get around it when running tests locally, I added a few lines to set Java system properties:
System.setProperty(ACCESS_KEY_SYSTEM_PROPERTY, "accesskey");
System.setProperty(SECRET_KEY_SYSTEM_PROPERTY, "secretkey");
As per the Amazon Web Services documentation, Working with AWS Credentials, the officially supported Java system properties are:
aws.accessKeyId
aws.secretKey
The following sets these system properties:
System.setProperty("aws.accessKeyId", "super-access-key");
System.setProperty("aws.secretKey", "super-secret-key");
This needs to be set before creating the Amazon DynamoDB client.
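Putting it together for the local-DynamoDB case from the question, something like the following should work; the key values are arbitrary placeholders, since local DynamoDB does not validate them:

// Set dummy credentials BEFORE building the client; the SDK's credential
// provider chain only needs something to be present for local DynamoDB.
System.setProperty("aws.accessKeyId", "dummy");
System.setProperty("aws.secretKey", "dummy");

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
        .build();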

"unsupported collating sort order" when trying to read from Access using Jackcess (Java)

I'm currently working on a Java application whose purpose is to read a Microsoft Access file using the Jackcess open source library. The Java application will later present the tables contained in the Access file.
Here is my code so far:
public class Test {
    public static void main(String[] args) throws IOException {
        File file = new File("\\\\student.local\\Files\\Home\\nat12mja\\Downloads\\Testdoc.accdb");
        Database db = DatabaseBuilder.open(file);
        Table table = db.getTable("Table1");
        for (Row row : table) {
            System.out.println(row.get("Field1"));
        }
    }
}
These are my imports:
import java.io.File;
import java.io.IOException;
import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;
import com.healthmarketscience.jackcess.Row;
import com.healthmarketscience.jackcess.Table;
Also, I've added these jar files to my referenced libraries:
commons-lang-2.4.jar, commons-logging-1.1.jar, jackcess-2.0.2.jar
When I run my application I get this error message (the System.out.println() works as intended):
dec 21, 2013 1:54:27 EM com.healthmarketscience.jackcess.impl.IndexData setUnsupportedReason
WARNING: unsupported collating sort order SortOrder[1053(0)] for text index, making read-only
dec 21, 2013 1:54:27 EM com.healthmarketscience.jackcess.impl.DatabaseImpl readSystemCatalog
INFO: Could not find expected index on table MSysObjects
I've tested with older versions of the same Access file, but the problem persists.
Is this a library related problem? Or am I missing something else?
Jackcess only supports indexes on Text fields in an Access database when the database is using the "General" sort order (see the Jackcess documentation).
According to the related Microsoft Office support page:
To reset the sort order for an existing database, select the language you want to use and then run a compact operation on the database.
So, for Access 2010 that would presumably mean selecting File > Options from the Access ribbon, choosing "General" or "General - Legacy" for the "New database sort order" on the "General" tab, and then performing a "Compact and Repair" on the database.
Note: If Windows is using a non-English locale then the procedure described above might not rectify the problem. See this answer for details.

Flyway - oracle PL/SQL procedures migration

What would be the preferable way to update schema_version table and execute modified PL/SQL packages/procedures in flyway without code duplication?
My example would require a class file to be created for each PL/SQL code modification:
public class V2_1__update_scripts extends AbstractMigration {
    // update package and procedures
}
The AbstractMigration class executes the files in the db/update folder:
public abstract class AbstractMigration implements SpringJdbcMigration {

    private static final Logger log = LoggerFactory.getLogger(AbstractMigration.class);

    @Override
    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        Resource packageFolder = new ClassPathResource("db/update");
        Collection<File> files = FileUtils.listFiles(packageFolder.getFile(), new String[] { "sql" }, true);
        for (File file : files) {
            log.info("Executing [{}]", file.getAbsolutePath());
            String fileContents = FileUtils.readFileToString(file);
            jdbcTemplate.execute(fileContents);
        }
    }
}
Is there any better way of executing PL/SQL code?
I wonder if it's better to duplicate the code into the standard migrations folder. It seems like, with the given example, you wouldn't be able to migrate up to version N of the db, as some prior version would execute the current version of all the PL/SQL. I'd be interested to see if you settled on a solution for this.
There is no built-in support, nor some other command you have missed.
Off the top of my head, I would think about either the way you presented here or using a generator to produce new migration SQL files after an SCM commit.
Let's see if someone else has found a better solution.
The version of Flyway current at the time of this writing (v4.2.0) supports the notion of repeatable migrations, designed specifically for such situations. Basically, any script with "create or replace" semantics is a candidate.
Simply name your script R__mypackage_body.sql, or use whatever prefix you have configured for repeatable scripts; a sketch follows. Please see Sql-based migrations and Repeatable migrations for further information.
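As a sketch (assuming the Flyway 4.x programmatic API; the URL, credentials, and script names are illustrative), the runner side stays trivial once repeatable scripts are in place:

import org.flywaydb.core.Flyway;

public class MigrationRunner {
    public static void main(String[] args) {
        // Flyway 4.x-style programmatic configuration; adjust URL and credentials.
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:oracle:thin:@//localhost:1521/XE", "user", "password");
        // Versioned scripts (e.g. V2_1__update_scripts.sql) run once;
        // repeatable scripts (e.g. R__my_package_body.sql) re-run whenever their checksum changes.
        flyway.migrate();
    }
}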

Import SSJS script library using DXL in a database

We need to import an SSJS library into a database using DXL. For this we have written a Java agent; its code goes something like this:
import lotus.domino.*;

public class JavaAgent extends AgentBase {

    private DxlImporter importer = null;

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            String filename = "C:\\tempssjslib.xml";
            Stream stream = session.createStream();
            if (stream.open(filename) & (stream.getBytes() > 0)) {
                Database importdb = session.getCurrentDatabase();
                importer = session.createDxlImporter();
                importer.setReplaceDbProperties(true);
                importer.setReplicaRequiredForReplaceOrUpdate(false);
                importer.setAclImportOption(DxlImporter.DXLIMPORTOPTION_REPLACE_ELSE_IGNORE);
                importer.setDesignImportOption(DxlImporter.DXLIMPORTOPTION_REPLACE_ELSE_CREATE);
                importer.importDxl(stream, importdb);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                System.out.println(importer.getLog());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
The file C:\tempssjslib.xml contains an SSJS library which I created in Domino Designer and then exported using "Tools > DXL Utilities > Exporter" (for testing purposes). But when I run this agent, the library does not get imported into the database. There is no error in DxlImporter.getLog() either.
I tried a similar procedure with XPages, forms, and LotusScript script libraries, and was able to import them successfully. But the same agent is not able to import an SSJS library.
Is there something I have missed in the code? Can we import an SSJS library into a database using DXL?
It looks like the exporter tool (or maybe even the DXL exporter itself) is not exporting all the needed fields. If you manually add the following inside the DXL file, just before the item name='$ServerJavaScriptLibrary'... line, it will import successfully:
<item name='$Flags'><text>.5834Q</text></item>
<item name='$TITLE'><text>...name of the SSJS library...</text></item>
If you print the imported note ID and analyze it in an appropriate tool (Ytria or NotesPeek) you'll see that the problem is with the $Flags field.
I created a test SSJS library and its $Flags field contains ".5834Q". But the imported one has "34Q" only.
I don't have the exact reference for those flags, but it may be a good start. Manually overwriting this field works successfully, but the flag may contain some valuable information.
It seems like a bug to me.
In addition, the Ytria tool has a good reference about the $Flags field content.
Make your life easier and use the Import/Export plug-in found on OpenNTF: http://www.openntf.org/blogs/openntf.nsf/d6plinks/NHEF-7YAAF6 It has an ANT API, so you can automate operations. It needs Domino Designer, so it might not fit your use case. Alternatively (I haven't checked): did you have a look at whether webDAV exposes the script libraries?

Limit Derby Log File Size

Our app sends the contents of the derby.log file to our server whenever Apache Derby throws a SQLException in our app.
In order to get detailed logs, we are setting the 'derby.infolog.append' property to true.
However, we are noticing enormously large log files, since the log also contains boot-up output each time a connection is made to the database.
NOTE: we are using Derby in embedded mode.
Is there a way to have Derby limit the total number of lines it logs to the derby.log file?
For example, keeping only the most recent 1000 lines and overwriting the oldest entries.
Our goal is to get useful debugging info from end users but to prevent the log files from growing to unmanageable sizes.
Thanks in advance,
Jim
I'm not that familiar with Derby, but I couldn't find an "easy" way to do this.
There are, however, some Derby properties you could set to implement it yourself. Check these:
derby.stream.error.field
derby.stream.error.file
derby.stream.error.method
derby.stream.error.logSeverityLevel
So I imagine writing a class which subclasses java.io.OutputStream or java.io.Writer, and then either:
implementing the wanted behaviour yourself, or
doing something similar to How do I limit the size of log file? and wrapping it as one of the above, or
borrowing ideas for a rolling-file logger class from another project (RollingFileAppender in log4j, RollingFileWriter in clapper, ...).
You can create a custom logging class and specify it using derby.stream.error.field as mentioned above. The logging class doesn't have to be backed by a file; if you are limiting the size of the logging data, you can easily hold it in memory.
The second advantage of this is that when a problem is encountered, you have a great deal of flexibility in what to do with the logging data. Perhaps compress (or encrypt) the data and automatically open a ticket in your help system, for example.
Here's an example of a very simple custom logging solution:
import java.io.CharArrayWriter;

public class CustomLog {
    public static CharArrayWriter log = new CharArrayWriter();

    public static void dump() {
        System.out.println(log.toString());
    }
}
You can replace the CharArrayWriter with a size-limited buffer of some sort, and add an implementation of dump() to do whatever you want with the resulting log data; a sketch of such a buffer follows.
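For instance, here is a minimal sketch of a size-capped buffer. The class name BoundedLog and the cap value are assumptions for illustration; Derby just needs a public static field holding a Writer, which you would reference by setting derby.stream.error.field to "BoundedLog.log", analogous to CustomLog.log above:

import java.io.Writer;

// Hypothetical size-capped log: keeps only the most recent maxChars
// characters of Derby's output, discarding the oldest characters first.
public class BoundedLog extends Writer {
    public static final BoundedLog log = new BoundedLog(100000);

    private final StringBuilder buffer = new StringBuilder();
    private final int maxChars;

    public BoundedLog(int maxChars) {
        this.maxChars = maxChars;
    }

    @Override
    public synchronized void write(char[] cbuf, int off, int len) {
        buffer.append(cbuf, off, len);
        int excess = buffer.length() - maxChars;
        if (excess > 0) {
            buffer.delete(0, excess); // trim the oldest output
        }
    }

    @Override
    public void flush() {
        // nothing to do; everything is kept in memory
    }

    @Override
    public void close() {
        // nothing to release
    }

    public static void dump() {
        synchronized (log) {
            System.out.println(log.buffer.toString());
        }
    }
}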
A short example program demonstrating this follows:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class DerbyLoggingExample {

    public DerbyLoggingExample() {
        System.setProperty("derby.stream.error.field", "CustomLog.log");

        String driver = "org.apache.derby.jdbc.EmbeddedDriver";
        String dbName = "logdemoDB";
        String connectionURL = "jdbc:derby:" + dbName + ";create=true";
        String createCommand = "create table test_table ("
                + "test_id int not null generated always as identity, "
                + "test_name varchar(20)"
                + ")";

        try {
            Class.forName(driver);
        } catch (java.lang.ClassNotFoundException e) {
            System.out.println("Could not load Derby driver.");
            return;
        }

        Connection conn = null;
        Statement statement = null;
        try {
            conn = DriverManager.getConnection(connectionURL);
            statement = conn.createStatement();
            statement.execute(createCommand);
        } catch (SQLException sqle) {
            sqle.printStackTrace();
            System.out.println("SQLException encountered. Dumping log.");
            CustomLog.dump();
            return;
        } finally {
            try {
                statement.close();
                conn.close();
            } catch (SQLException e) {
                // Do nothing.
            }
        }

        System.out.println("Processing done. Dumping log.");
        CustomLog.dump();
    }

    public static void main(String[] argv) {
        DerbyLoggingExample thisApp = new DerbyLoggingExample();
    }
}
Another way to handle this would be to write your own code which rotates, truncates, compresses, or otherwise pares down the derby.log file in-between runs of Derby.
You don't mention which version of Derby you're running, but I thought the line-per-connection output was removed in a more recent release. Or perhaps it was only removed from the Network Server output rather than from derby.log output?
If it's the line-per-connection output that is swelling your derby.log, then you might consider connection pooling techniques so that you don't make so many connections; a sketch follows below. Generally you can hang on to connections for the lifetime of your application; you don't have to create and destroy them very often.
If you think there is excess unnecessary output going to derby.log, you might log an enhancement request in the Derby community bug tracker, with examples, to ensure that future versions of Derby don't log unneeded output.
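A minimal sketch of that idea, holding one embedded connection open for the application's lifetime instead of reconnecting per query (the class name and JDBC URL are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Illustrative holder: one shared embedded-Derby connection for the app's
// lifetime, so derby.log is not swelled by per-connection boot-up output.
public class SharedConnection {
    private static Connection conn;

    public static synchronized Connection get() throws SQLException {
        if (conn == null || conn.isClosed()) {
            conn = DriverManager.getConnection("jdbc:derby:logdemoDB;create=true");
        }
        return conn; // callers share this connection instead of opening their own
    }

    public static synchronized void shutdown() throws SQLException {
        if (conn != null) {
            conn.close();
        }
    }
}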

Categories

Resources