I'm using Elasticsearch 6.x with the ingest plugin to let me query inside documents. I managed to index records with attachment documents, and I'm able to query them against various fields.
When I query the content of the file I do this:
boolQuery.filter(new MatchPhrasePrefixQueryBuilder("attachment.content", "St. Anna Church"))
It works, but now I want to query with the string "Church Wall People", which is not a complete phrase: I want back all the documents that contain the words Church, Wall and People.
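A match_phrase_prefix query requires the terms to appear as a phrase. To match the individual words regardless of position or order, a plain match query with "operator": "and" is a better fit; a sketch against the 6.x query DSL (in the Java API this corresponds to `new MatchQueryBuilder("attachment.content", "Church Wall People").operator(Operator.AND)`):

```json
{
  "query": {
    "match": {
      "attachment.content": {
        "query": "Church Wall People",
        "operator": "and"
      }
    }
  }
}
```

With "operator": "and" every term must appear in the document; leaving the operator out (the default is "or") would instead return documents containing any of the words.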
I have a requirement to read values from a PDF file and save the result in a database.
I have converted the PDF to text.
The text data now looks like this:
Test Name Results Units Bio. Ref. Interval
LIPID PROFILE, BASIC, SERUM
Cholesterol Total 166.00 mg/dL <200.00
Triglycerides 118.00 mg/dL <150.00
My requirement is to read the table data from the PDF file and save it in the MySQL database as-is.
Use java.io to read the text file and JDBC to save the information in MySQL via SQL.
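A minimal sketch of that approach, assuming each table row arrives as a line of the form name value unit interval; the regex and the lab_results table/column names in the comment are illustrative, not from the original data source:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LabReportParser {
    // Matches lines like "Cholesterol Total 166.00 mg/dL <200.00":
    // a test name, a decimal value, a unit, and a reference interval.
    private static final Pattern ROW = Pattern.compile(
            "^(.+?)\\s+(\\d+(?:\\.\\d+)?)\\s+(\\S+)\\s+(\\S+)$");

    public static void main(String[] args) {
        // In the real program these lines would come from a BufferedReader
        // over the converted text file.
        String[] lines = {
                "Cholesterol Total 166.00 mg/dL <200.00",
                "Triglycerides 118.00 mg/dL <150.00"
        };
        for (String line : lines) {
            Matcher m = ROW.matcher(line);
            if (m.matches()) {
                System.out.println(m.group(1) + "|" + m.group(2)
                        + "|" + m.group(3) + "|" + m.group(4));
                // With JDBC you would then bind the four groups to a
                // PreparedStatement, e.g. (hypothetical table/columns):
                // INSERT INTO lab_results(test_name, result, unit, ref_interval)
                // VALUES (?, ?, ?, ?)
            }
        }
    }
}
```

Header lines such as "LIPID PROFILE, BASIC, SERUM" simply fail the regex and are skipped.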
I have 2 tables, CONFIGURATION_INFO and CONFIGURATION_FILE. I use the query below to find all employee files:
select i.cfg_id, Filecontent
from CONFIGURATION_INFO i,
CONFIGURATION_FILE f
where i.cfg_id=f.cfg_id
but I also need to parse or extract data from the BLOB column Filecontent and display all cfg_id values whose XML tag PCVERSION starts with 8. Is there any way?
XML tag that needs to be extracted is <CSMCLIENT><COMPONENT><PCVERSION>8.1</PCVERSION></COMPONENT></CSMCLIENT>
It need not be a query; even Java or Groovy code would help me.
Note: Some of the XMLs might be as big as 5MB.
So basically the data in the Filecontent column is a BLOB?
The way to query the XML out of the BLOB content is the XMLType function: it converts your column's datatype from BLOB to XMLType, which you can then parse with an XPath expression via extract().
Oracle Database
select
xmltype(Filecontent, 871).extract('//CSMCLIENT/COMPONENT/PCVERSION/text()').getstringval()
from CONFIGURATION_INFO ...
Do the rest of the WHERE logic on your own.
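Putting it together with the original join, a complete query might look like this (a sketch: 871 is Oracle's character-set id for UTF8, and the LIKE filter keeps only versions starting with 8):

```sql
SELECT i.cfg_id
FROM CONFIGURATION_INFO i
JOIN CONFIGURATION_FILE f ON i.cfg_id = f.cfg_id
WHERE XMLTYPE(f.Filecontent, 871)
        .extract('//CSMCLIENT/COMPONENT/PCVERSION/text()')
        .getStringVal() LIKE '8%'
```

Given that some files are around 5 MB, be aware that this parses every BLOB on each run; if it is too slow, extracting the version once into its own indexed column may be worth considering.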
Usually you know what the data in the BLOB column is, so you can parse it in the SQL query.
If it is a text column (VARCHAR or similar) you can use to_char(columnName).
There are many more functions you can use; you can find them at this link.
Usually you will use to_char/to_date/hextoraw/rawtohex.
See also this link on converting a BLOB to a file.
I want to write a SPARQL query to get RDF data based on its id. I am trying
SELECT ?ID ?NAME WHERE {?ID = "something" }
but it does not return the expected results. Does anyone know what my mistake is?
Actually rdf:ID is the resource URI itself. You can use a SPARQL FILTER clause to filter your results, or you can insert the URI directly into the WHERE clause of your query, e.g.
<myURI> ex:name ?name .
To give a precise answer you should share a small fragment of your RDF data (ideally in Turtle format, which is human-friendly).
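For example, assuming a made-up ex: namespace, property, and resource URI (substitute whatever your data actually uses), the two variants look like this:

```sparql
PREFIX ex: <http://example.org/>

# Variant 1: put the resource URI directly in the triple pattern
SELECT ?name WHERE {
  ex:something ex:name ?name .
}

# Variant 2: keep ?id as a variable and restrict it with FILTER
SELECT ?id ?name WHERE {
  ?id ex:name ?name .
  FILTER (?id = ex:something)
}
```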
Is there an open source file-based (NOT in-memory) JDBC driver for CSV files? My CSV files are dynamically generated from the UI according to the user's selections, and each user will have a different CSV file. I'm doing this to reduce database hits, since the information is already contained in the CSV file. I only need to perform SELECT operations.
HSQLDB allows indexed searches if we specify an index, but I won't be able to provide a unique column that can be used as an index, so it does its SQL operations in memory.
Edit:
I've tried CsvJdbc, but it doesn't support simple operations like ORDER BY and GROUP BY. It is also unclear whether it reads from the file or loads it into memory.
I've tried xlSQL, but that again relies on HSQLDB and only works with Excel, not CSV. Plus it's no longer developed or supported.
H2, but that only reads CSV; it doesn't support SQL over it.
You can solve this problem using the H2 database.
The following groovy script demonstrates:
Loading data into the database
Running a "GROUP BY" and "ORDER BY" sql query
Note: H2 supports in-memory databases, so you have the choice of persisting the data or not.
import groovy.sql.Sql

// Create the file-based database
def sql = Sql.newInstance("jdbc:h2:db/csv", "user", "pass", "org.h2.Driver")
// Load CSV file
sql.execute("CREATE TABLE data (id INT PRIMARY KEY, message VARCHAR(255), score INT) AS SELECT * FROM CSVREAD('data.csv')")
// Print results
def result = sql.firstRow("SELECT message, score, count(*) FROM data GROUP BY message, score ORDER BY score")
assert result[0] == "hello world"
assert result[1] == 0
assert result[2] == 5
// Cleanup
sql.close()
Sample CSV data (data.csv). Note that H2's CSVREAD treats the first line of the file as the column names:
ID,MESSAGE,SCORE
0,hello world,0
1,hello world,1
2,hello world,0
3,hello world,1
4,hello world,0
5,hello world,1
6,hello world,0
7,hello world,1
8,hello world,0
9,hello world,1
If you check the SourceForge project CsvJdbc, please report your experiences. The documentation says it is useful for importing CSV files.
Project page
This was discussed on Superuser https://superuser.com/questions/7169/querying-a-csv-file.
You can use the Text Tables feature of hsqldb: http://hsqldb.org/doc/2.0/guide/texttables-chapt.html
csvsql/gcsvsql are also possible solutions (but there is no JDBC driver, you will have to run a command line program for your query).
sqlite is another solution but you have to import the CSV file into a database before you can query it.
Alternatively, there is commercial software such as http://www.csv-jdbc.com/ which will do what you want.
To do anything with a file you have to load it into memory at some point. What you could do is open the file and read it line by line, discarding each previous line as you read the next. The only downside to this approach is its linearity. Have you thought about using something like memcached on a server, with key-value stores in memory that you can query instead of dumping to a CSV file?
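A sketch of that line-by-line approach in plain Java; the inline sample data stands in for a FileReader over the real CSV file, and the score = 0 selection is just an illustrative filter:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class CsvLineScan {
    public static void main(String[] args) throws IOException {
        // Stands in for: new BufferedReader(new FileReader("data.csv"))
        String csv = "0,hello world,0\n1,hello world,1\n2,hello world,0\n";
        int matches = 0;
        try (BufferedReader in = new BufferedReader(new StringReader(csv))) {
            String line;
            while ((line = in.readLine()) != null) {   // one line in memory at a time
                String[] cols = line.split(",", -1);   // id, message, score
                if (cols[2].equals("0")) {             // the "WHERE score = 0" part
                    matches++;
                }
            }
        }
        System.out.println("rows with score 0: " + matches);
    }
}
```

This keeps memory flat regardless of file size, but every query is a full scan, which is the linearity mentioned above.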
You can use either a specialized JDBC driver like CsvJdbc (http://csvjdbc.sourceforge.net), or you may choose to configure a database engine such as MySQL to treat your CSV as a table and then manipulate the CSV through a standard JDBC driver.
The trade-off here is available SQL features vs. performance.
Direct access to CSV via CsvJdbc (or similar) gives you very quick operations on big data volumes, but without the ability to sort or group records using SQL commands;
the MySQL CSV engine provides a rich set of SQL features, but at the cost of performance.
So if your table is relatively small, go with MySQL. However, if you need to process big files (> 100 MB) without grouping or sorting, go with CsvJdbc.
If you need both, to handle very big files and to manipulate them using SQL, then the optimal course of action is to load the CSV into a normal database table (e.g. MySQL) first and then handle the data as a usual SQL table.