I am new to Jackcess (downloaded it today, version 4.0.4) and immediately ran into trouble: would anybody know why db.getTable(aName) returns null, whereas db.getTableNames() lists that very aName among others?
Note that I am running it with Apache Commons Lang 3.12.0, because I could not find Apache Commons Lang 3.10 as requested in the dependencies of Jackcess 4.0.4. Could that explain the behavior?
In the code below, "dbfile" and "tble" still have to be defined according to your database; unfortunately I cannot release my database as it is proprietary. I get null from db.getTable(aName) no matter what OPTION is. Obviously, the code path with OPTION != 1 is a workaround to find out whether the corresponding table name is actually in the database. When I run the code with OPTION = 0, the output is:
That is it: [my table name]
Your table is null.
I would appreciate it if you could share your ideas so I can make this example work.
import java.io.IOException;
import java.io.File;
import java.util.Set;
import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;
import com.healthmarketscience.jackcess.Table;
public class JackcessTrial {
private static final int OPTION = 0;
public JackcessTrial() {
super();
}
public void openSourceTable(File dbFile, String tbleName) {
Database db = null;
Table myTable = null;
try {
db = new DatabaseBuilder(dbFile).setReadOnly(true).open();
if (db==null) {
System.out.println("No database found.");
return;
}
if (OPTION==1) {
myTable = db.getTable(tbleName);
} else {
Set<String> names = db.getTableNames();
for(String name : names) {
if (name.equals(tbleName)) {
System.out.println("That is it: "+name);
myTable = db.getTable(name);
break;
}
}
}
if (myTable == null) {
System.out.println("Your table is null.");
db.close();
return;
}
System.out.println("Got your table!");
db.close();
} catch(Exception e) {
e.printStackTrace();
db = null;
}
}
public static void main(String args[]) throws IOException {
File dbfile = ...;
String tble = ...;
JackcessTrial test = new JackcessTrial();
test.openSourceTable(dbfile, tble);
}
}
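One additional diagnostic I can run, if that helps (just a sketch): Database is Iterable<Table> in Jackcess, so iterating the open db forces every table to be opened, which should show whether my table can be materialized at all or whether something fails silently along the way.
// Diagnostic sketch, not part of the program above: force every table open.
for (Table t : db) {
    System.out.println("Opened table: " + t.getName()
            + " with " + t.getColumnCount() + " columns");
}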
Related: this post has a class that can output a String with the geolocation of an IP address:
https://hoffa.medium.com/free-ip-address-geolocation-with-maxmind-and-snowflake-676e74ea80b8
The relevant part is:
public String x(String ip) throws Exception {
CityResponse r = _reader.city(InetAddress.getByName(ip));
return r.getCity().getName() + ", " + r.getMostSpecificSubdivision().getIsoCode() + ", "+ r.getCountry().getIsoCode();
}
But I want to return a variant instead with all the information. How can I do that?
You can declare the UDF to return a variant and, within the Java code, just return a JSON string, as in:
CityResponse r = _reader.city(InetAddress.getByName(ip));
return r.toJson();
My teammate Steven Maser wrote this solution; with it you can ask the UDF for all the details and parse the results as needed in SQL, as in:
select geoip2_all('156.33.241.5');
select geoip2_all('156.33.241.5'):city:names:en::varchar;
Full UDF code:
create or replace function geoip2_all(ip String)
returns variant
language java
handler = 'X.x'
imports = ('@fh_jars/geoip2-4.0.0.jar'
, '@fh_jars/maxmind-db-3.0.0.jar'
, '@fh_jars/jackson-annotations-2.14.1.jar'
, '@fh_jars/jackson-core-2.14.1.jar'
, '@fh_jars/jackson-databind-2.14.1.jar')
as $$
import java.io.File;
import java.net.InetAddress;
import com.snowflake.snowpark_java.types.SnowflakeFile;
import com.maxmind.geoip2.model.*;
import com.maxmind.geoip2.DatabaseReader;
import com.maxmind.geoip2.exception.AddressNotFoundException;
class X {
DatabaseReader _reader;
public String x(String ip) throws Exception {
if (null == _reader) {
// lazy initialization
_reader = new DatabaseReader.Builder(SnowflakeFile.newInstance("@fh_jars/GeoLite2-City.mmdb").getInputStream()).build();
}
try {
CityResponse r = _reader.city(InetAddress.getByName(ip));
return r.toJson();
} catch (AddressNotFoundException e) {
return null;
}
}
}
$$;
The code below (not mine) is supposed to check the connection status of a ZooKeeper node by using the znode_exists() method. I want to use it in a producer API before publishing a message, but I am getting an error when declaring the ZooKeeperConnection object. I have added all the classes and libraries I could think of.
import java.io.IOException;
//import org.apache.zookeeper.*; adding this doesn't help
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.data.Stat;
public class ZKExists {
private static ZooKeeper zk;
private static ZooKeeperConnection conn; // error: ZooKeeperConnection cannot be resolved to a type
// Method to check existence of znode and its status, if znode is available.
public static Stat znode_exists(String path) throws
KeeperException,InterruptedException {
return zk.exists(path, true);
}
public static void main(String[] args) throws InterruptedException,KeeperException {
String path = "/Znode_path"; // Assign znode to the specified path
try {
ZooKeeperConnection conn = new ZooKeeperConnection();
zk = conn.connect("localhost");
Stat stat = znode_exists(path); // Stat checks the path of the znode
if(stat != null) {
System.out.println("Node exists and the node version is " +
stat.getVersion());
} else {
System.out.println("Node does not exists");
}
} catch(Exception e) {
System.out.println(e.getMessage()); // Catches error messages
}
}
}
Which library am I missing to make this object resolve?
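If it helps, what I ultimately need is just a connected ZooKeeper handle. My own rough sketch of such a connect/close helper against the plain ZooKeeper API looks like this (the 5000 ms session timeout is an arbitrary value I picked):
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;
public class ZooKeeperConnection {
    private ZooKeeper zoo;
    // released once the client reports SyncConnected
    private final CountDownLatch connectedSignal = new CountDownLatch(1);
    public ZooKeeper connect(String host) throws IOException, InterruptedException {
        // 5000 ms session timeout is my assumption, not a recommended value
        zoo = new ZooKeeper(host, 5000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == KeeperState.SyncConnected) {
                    connectedSignal.countDown();
                }
            }
        });
        connectedSignal.await(); // block until the session is established
        return zoo;
    }
    public void close() throws InterruptedException {
        zoo.close();
    }
}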
Is it possible to determine which client libraries have been loaded prior to a component?
We are running multiple sites backed by different JavaScript frameworks. In order to run a single component across the board, it is not sufficient to just use
<cq:includeClientLib categories="blah"/>
We need to identify the respective framework (e.g. AngularJS, vanilla JS, jQuery, and so on) in order to facilitate the integration.
We are looking for a decent server-side solution.
I haven't actually done this, but if your output is buffered it should be possible to clone or inspect the JspWriter buffer to see what it already contains. That sounds ugly to me, though. Below is the decompiled code for how the cq:includeClientLib tag adds libraries to the output, which may show you how you could read back what was previously written:
package com.adobe.granite.ui.tags;
import com.day.cq.widget.HtmlLibraryManager;
import java.io.IOException;
import javax.servlet.ServletRequest;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.JspWriter;
import javax.servlet.jsp.PageContext;
import javax.servlet.jsp.tagext.TagSupport;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.scripting.SlingBindings;
import org.apache.sling.scripting.jsp.util.TagUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class IncludeClientLibraryTag extends TagSupport {
private static final long serialVersionUID = -3068291967085012331L;
private static final Logger log = LoggerFactory.getLogger(IncludeClientLibraryTag.class);
private String categories;
private String js;
private String css;
private String theme;
private Boolean themed;
public IncludeClientLibraryTag() {
}
public void setPageContext(PageContext pageContext) {
super.setPageContext(pageContext);
this.categories = null;
this.js = null;
this.css = null;
this.theme = null;
this.themed = null;
}
public void setCategories(String categories) {
this.categories = categories;
}
public void setJs(String js) {
this.js = js;
}
public void setCss(String css) {
this.css = css;
}
public void setTheme(String theme) {
this.theme = theme;
}
public void setThemed(boolean themed) {
this.themed = Boolean.valueOf(themed);
}
public int doEndTag() throws JspException {
SlingHttpServletRequest request = TagUtil.getRequest(this.pageContext);
HtmlLibraryManager libManager = this.getHtmlLibraryManager(request);
if(libManager == null) {
log.warn("<ui:includeClientLib>: Could not retrieve HtmlLibraryManager service, skipping inclusion.");
return 6;
} else {
JspWriter out = this.pageContext.getOut();
try {
if(this.categories != null) {
libManager.writeIncludes(request, out, toArray(this.categories));
} else if(this.theme != null) {
libManager.writeThemeInclude(request, out, toArray(this.theme));
} else if(this.js != null) {
if(this.themed != null) {
libManager.writeJsInclude(request, out, this.themed.booleanValue(), toArray(this.js));
} else {
libManager.writeJsInclude(request, out, toArray(this.js));
}
} else if(this.css != null) {
if(this.themed != null) {
libManager.writeCssInclude(request, out, this.themed.booleanValue(), toArray(this.css));
} else {
libManager.writeCssInclude(request, out, toArray(this.css));
}
}
return 6;
} catch (IOException var6) {
String libs = this.categories != null?"categories: " + this.categories:(this.theme != null?"theme: " + this.theme:(this.js != null?"js: " + this.js:(this.css != null?"css: " + this.css:"")));
throw new JspException("Could not include client library: " + libs, var6);
}
}
}
private HtmlLibraryManager getHtmlLibraryManager(ServletRequest request) {
SlingBindings bindings = (SlingBindings)request.getAttribute(SlingBindings.class.getName());
return (HtmlLibraryManager)bindings.getSling().getService(HtmlLibraryManager.class);
}
private static String[] toArray(String commaSeparatedList) {
if(commaSeparatedList == null) {
return new String[0];
} else {
String[] split = commaSeparatedList.split(",");
for(int i = 0; i < split.length; ++i) {
split[i] = split[i].trim();
}
return split;
}
}
}
I think the best solution may be to use the client library dependencies or embed attributes in your library, or to let client-side JavaScript test whether a library is present (e.g. test whether the jQuery object is undefined) and then take appropriate action. In other words, let the client side determine the final rendering based on which libraries exist in the client. It sounds like this may not be possible in your situation, though.
dependencies: This is a list of other client library categories on
which this library folder depends. For example, given two
cq:ClientLibraryFolder nodes F and G, if a file in F requires another
file in G in order to function properly, then at least one of the
categories of G should be among the dependencies of F.
embed: Used to embed code from other libraries. If node F embeds nodes G and H, the
resulting HTML will be a concatenation of content from nodes G and H.
I'm trying to extract the date info from a picture. I'm getting along quite well, but this problem has been bugging me for two days. I've even rewritten the entire code once and I still hit it.
I obviously get an NPE because I return null in the method grabExifSubIFDDirectory. This is the main problem: it claims there is no Directory available while there should be one. Why can't it grab the directory? I'm using standard JPEGs and other formats.
The jar is placed inside a folder with pictures.
Could somebody point (hehe) me in the right direction?
Package utils:
package utils;
import com.drew.imaging.ImageMetadataReader;
import com.drew.imaging.ImageProcessingException;
import com.drew.metadata.Metadata;
import com.drew.metadata.exif.ExifSubIFDDirectory;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import javax.swing.JOptionPane;
public class FileUtils {
private ArrayList<String> fileNamesList;
public FileUtils(String jarFilePath) {
setFileNames(jarFilePath);
}
// Retrieves a Metadata object from a File object and returns it.
public Metadata grabFileMetaData(String filePath) {
Metadata metadata = null;
File file = new File(filePath);
try {
metadata = ImageMetadataReader.readMetadata(file);
} catch (ImageProcessingException e) {
JOptionPane.showMessageDialog(null, "Error: " + e);
} catch (IOException e) {
JOptionPane.showMessageDialog(null, "Error: " + e);
}
return metadata;
}
// Retrieves a ExifSubIFDDirectory object from a Metadata object and returns it.
public ExifSubIFDDirectory grabExifSubIFDDirectory(Metadata metadata, String filePath) {
ExifSubIFDDirectory directory;
if (metadata.containsDirectory(ExifSubIFDDirectory.class)) {
directory = (ExifSubIFDDirectory) metadata.getDirectory(ExifSubIFDDirectory.class);
return directory;
} else {
JOptionPane.showMessageDialog(null, "File at: " + filePath + " does not contain exif date.");
return null;
}
}
// Retrieves a Date object from a ExifSubIFDDirectory object and returns it.
public Date grabDate(ExifSubIFDDirectory directory) {
Date date;
date = directory.getDate(ExifSubIFDDirectory.TAG_DATETIME_ORIGINAL);
return date;
}
// Return the actual Date object using the above methods.
public Date getDate(String filePath) {
return grabDate(grabExifSubIFDDirectory(grabFileMetaData(filePath), filePath));
}
// Retrieves the names of the files in the same folder as the executed jar.
// Saves them in a variable.
public void setFileNames(String jarPath) {
ArrayList<String> temp = new ArrayList();
String path = jarPath;
String files;
File folder = new File(path);
File[] listOfFiles = folder.listFiles();
for (File listOfFile : listOfFiles) {
if (listOfFile.isFile()) {
files = listOfFile.getName();
if (!"PhotoRenamer.jar".equals(files) && !"Thumbs.db".equals(files)) {
temp.add(files);
}
}
}
this.fileNamesList = temp;
}
// getter
public ArrayList<String> getFileNamesList() {
return fileNamesList;
}
}
Package domein:
package domein;
import utils.FileUtils;
import utils.JarUtils;
import java.util.ArrayList;
public class DomeinController {
FileUtils fileUtils;
JarUtils jarUtils;
public DomeinController() {
this.jarUtils = new JarUtils();
this.fileUtils = new FileUtils(jarUtils.getJarPath());
}
public ArrayList<String> getFileNamesList() {
return fileUtils.getFileNamesList();
}
public String getJarPath() {
return jarUtils.getJarPath();
}
// Retrieve string from Date object of the file with the number i.
public String getDate(int i) {
return fileUtils.getDate(createFilePath(i)).toString();
}
public String createFilePath(int i) {
return getJarPath() + "\\" + fileUtils.getFileNamesList().get(i);
}
}
Package startup:
package startup;
import domein.DomeinController;
import java.net.URISyntaxException;
import javax.swing.JOptionPane;
public class Main {
public static void main(String[] args) throws URISyntaxException {
DomeinController dc = new DomeinController();
// print out jar path
JOptionPane.showMessageDialog(null,dc.getJarPath());
// print out file names in folder
String lijstje = "";
for (int i=0;i<dc.getFileNamesList().size();i++){
lijstje += dc.getFileNamesList().get(i);
}
JOptionPane.showMessageDialog(null,lijstje);
JOptionPane.showMessageDialog(null,dc.getDate(1));
}
}
The getDate(String filePath) method is working just fine; I tested it. So the problem must come from the picture itself.
Test your code with a single picture (with a unit test) and make sure the getDate method works for you; do the test with a picture coming from the Internet too (a minimal sketch of such a test follows below).
Since the Metadata object is not null, the file does exist, so the picture simply has no EXIF information; there is no real doubt about that.
Use this tool to check if your pictures have EXIF information
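A minimal sketch of the unit test suggested above (JUnit 4; the testdata folder and sample-with-exif.jpg file are placeholder names for a picture you know contains EXIF data):
import java.util.Date;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;
import utils.FileUtils;
public class FileUtilsTest {
    @Test
    public void getDateReturnsExifDateForKnownPicture() {
        // Point these placeholders at a folder and picture that definitely carry EXIF data.
        FileUtils fileUtils = new FileUtils("testdata");
        Date date = fileUtils.getDate("testdata/sample-with-exif.jpg");
        assertNotNull("Expected DateTimeOriginal to be present", date);
    }
}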
What you have to realize is that not all images contain that EXIF information. For any number of reasons, the metadata can be scrubbed from an image.
I was going through a similar issue earlier today, and this was my solution:
public HashMap<String, String> getMetadata(File photoFile){
Metadata metadata;
ExifSubIFDDirectory exifSubIFDDirectory;
HashMap<String, String> tagMap = new HashMap<>();
try {
metadata = ImageMetadataReader.readMetadata(photoFile);
exifSubIFDDirectory = metadata.getDirectory(ExifSubIFDDirectory.class);
ExifSubIFDDescriptor exifSubIFDDescriptor = new ExifSubIFDDescriptor(exifSubIFDDirectory);
if (exifSubIFDDirectory != null) {
tagMap.put("lens", exifSubIFDDirectory.getDescription(ExifSubIFDDirectory.TAG_LENS_MODEL).toString());
tagMap.put("captureDate", exifSubIFDDirectory.getDate(ExifSubIFDDirectory.TAG_DATETIME_ORIGINAL).toString());
tagMap.put("shutter", exifSubIFDDescriptor.getExposureTimeDescription());
tagMap.put("aperture", exifSubIFDDescriptor.getFNumberDescription());
tagMap.put("focalLength", exifSubIFDDescriptor.getFocalLengthDescription());
tagMap.put("iso", exifSubIFDDescriptor.getIsoEquivalentDescription());
tagMap.put("meterMode", exifSubIFDDescriptor.getMeteringModeDescription());
//null is a possible return value from the method calls above. Replace them
//with default no value string
for (String key : tagMap.keySet()){
if (tagMap.get(key) == null)
tagMap.put(key, "No Value Recorded");
}
} else {
Date currentDate = new Date();
tagMap.put("captureDate", currentDate.toString());
tagMap.put("shutter", "No Value Recorded");
tagMap.put("aperture", "No Value Recorded");
tagMap.put("focalLength", "No Value Recorded");
tagMap.put("iso", "No Value Recorded");
tagMap.put("meterMode","No Value Recorded");
tagMap.put("lens", "No Value Recorded");
}
} catch (ImageProcessingException|IOException|NullPointerException e) {
//unhandled exception, put out logging statement
log.error("Error processing metadata for file " + photoFile.getName() + "\n" + e.getStackTrace());
}
return tagMap;
}
As you can see, I check whether the ExifSubIFDDirectory object exists before doing any work. If it does, the metadata fields I want are saved in a HashMap object, which is later persisted to the database. If a tag doesn't exist, it is stored as a null value, which is later replaced with a 'No Value Recorded' string.
Essentially, your issue here was not checking whether the ExifSubIFDDirectory object returned by the metadata.getDirectory call was actually initialized. I think if you test with an image that does have the metadata set, you won't encounter this issue.
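Put differently, the guard needed here is tiny. A sketch against the older metadata-extractor API used in this question (newer releases replace getDirectory with getFirstDirectoryOfType):
import java.util.Date;
import com.drew.metadata.Metadata;
import com.drew.metadata.exif.ExifSubIFDDirectory;
public class ExifDateHelper {
    // Returns DateTimeOriginal, or null when the picture carries no ExifSubIFD
    // directory at all, instead of letting a NullPointerException escape.
    public static Date dateTimeOriginalOrNull(Metadata metadata) {
        ExifSubIFDDirectory dir = metadata.getDirectory(ExifSubIFDDirectory.class);
        return dir != null ? dir.getDate(ExifSubIFDDirectory.TAG_DATETIME_ORIGINAL) : null;
    }
}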
(This is general advice from the image codec perspective. It may or may not apply to users of the open-source library by Drew Noakes.)
My first step is to use Phil Harvey's ExifTool to dump the metadata from the JPEG file in a rather exhaustive way.
Once you are sure that the JPEG file contains EXIF data, what follows is troubleshooting effort to find out why the library does not return that part of data. It might be properly parsing it, but perhaps your code didn't retrieve it from its API in the way it expects.
(Since I don't know this open-source library, I cannot give any advice specific to this library.)
Check whether the JPEG file contains an APP1 segment. The APP1 segment is marked by a two-byte sequence 0xFF 0xE1.
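A minimal sketch of such a check in plain Java; it only walks the marker segments from the start of the file and ignores details such as 0xFF fill bytes, so treat it as a rough probe rather than a full JPEG parser:
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
public class App1Check {
    // Walks the JPEG marker segments and reports whether an APP1 (0xFF 0xE1)
    // segment, where EXIF data lives, is present. Stops at SOS or EOI.
    public static boolean hasApp1(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            if (in.readUnsignedShort() != 0xFFD8) {
                return false; // no SOI marker, so not a JPEG
            }
            while (true) {
                int marker = in.readUnsignedShort();
                if (marker == 0xFFE1) {
                    return true; // APP1 found
                }
                if (marker == 0xFFDA || marker == 0xFFD9) {
                    return false; // reached image data or end of image
                }
                int length = in.readUnsignedShort(); // segment length includes these two bytes
                in.skipBytes(length - 2);
            }
        }
    }
}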
Different image libraries provide different ways of finding out whether an APP1 segment is present. Other libraries may skip over, ignore, or consume and hide this segment from the API user.
If your library allows it, install a metadata header event handler for APP1 so that you can see what processing is performed on its data. You can then track down how the library intends to store and provide that data via its API.
http://www.digitalpreservation.gov/formats/fdd/fdd000147.shtml
http://en.wikipedia.org/wiki/JPEG
This is actually related to the question How can I add row numbers for rows in PIG or HIVE?
The third answer there, provided by srini, works fine, but I have trouble accessing the data after the UDF.
The UDF provided by srini is the following:
import java.io.IOException;
import java.util.Iterator;
import org.apache.pig.EvalFunc;
import org.apache.pig.backend.executionengine.ExecException;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.impl.logicalLayer.schema.Schema;
import org.apache.pig.data.DataType;
public class RowCounter extends EvalFunc<DataBag> {
TupleFactory mTupleFactory = TupleFactory.getInstance();
BagFactory mBagFactory = BagFactory.getInstance();
public DataBag exec(Tuple input) throws IOException {
try {
DataBag output = mBagFactory.newDefaultBag();
DataBag bg = (DataBag)input.get(0);
Iterator it = bg.iterator();
Integer count = new Integer(1);
while(it.hasNext())
{ Tuple t = (Tuple)it.next();
t.append(count);
output.add(t);
count = count + 1;
}
return output;
} catch (ExecException ee) {
// error handling goes here
throw ee;
}
}
public Schema outputSchema(Schema input) {
try{
Schema bagSchema = new Schema();
bagSchema.add(new Schema.FieldSchema("RowCounter", DataType.BAG));
return new Schema(new Schema.FieldSchema(getSchemaName(this.getClass().getName().toLowerCase(), input),
bagSchema, DataType.BAG));
}catch (Exception e){
return null;
}
}
}
I wrote a simple test Pig script as follows:
A = load 'input.txt' using PigStorage(' ') as (name:chararray, age:int);
/*
--A: {name: chararray,age: int}
(amy,56)
(bob,1)
(bob,9)
(amy,34)
(bob,20)
(amy,78)
*/
B = group A by name;
C = foreach B {
orderedGroup = order A by age;
generate myudfs.RowCounter(orderedGroup) as t;
}
/*
--C: {t: {(RowCounter: {})}}
({(amy,34,1),(amy,56,2),(amy,78,3)})
({(bob,1,1),(bob,9,2),(bob,20,3)})
*/
D = foreach C generate FLATTEN(t);
/*
D: {t::RowCounter: {}}
(amy,34,1)
(amy,56,2)
(amy,78,3)
(bob,1,1)
(bob,9,2)
(bob,20,3)
*/
The problem is how to use D in later operations. I tried multiple ways, but always got the following error:
java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.pig.data.DataBag
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POProject.processInputBag(POProject.java:575)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POProject.getNext(POProject.java:248)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:316)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:332)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:459)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:427)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:407)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:261)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:572)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:414)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:256)
My guess is that this is because we don't have the schema for the tuple inside the bag. If that is the reason, how should I modify the UDF?
OK, I found the solution by rewriting the outputSchema as follows:
public Schema outputSchema(Schema input) {
try{
Schema.FieldSchema counter = new Schema.FieldSchema("counter", DataType.INTEGER);
Schema tupleSchema = new Schema(input.getField(0).schema.getField(0).schema.getFields());
tupleSchema.add(counter);
Schema.FieldSchema tupleFs;
tupleFs = new Schema.FieldSchema("with_counter", tupleSchema, DataType.TUPLE);
Schema bagSchema = new Schema(tupleFs);
return new Schema(new Schema.FieldSchema("row_counter",
bagSchema, DataType.BAG));
}catch (Exception e){
return null;
}
}
}
Thanks.