I am a beginner at all things coding but need some help with Fusion Charts if anyone can help.
I have already followed tutorials for FusionCharts, linking it to a MySQL database and displaying a chart with no issues.
However, I would like to display a time-series chart, which requires FusionTime. FusionTime needs the data to be in a DataTable: "FusionTime accepts data in rows and columns as a DataTable."
I cannot find any examples online of taking SQL data and converting it into the DataTable (data plus schema) that FusionTime seems to require. This is different from the way FusionCharts itself works.
https://www.fusioncharts.com/dev/fusiontime/getting-started/create-your-first-chart-in-fusiontime
My SQL database contains many tables and many columns within it, so will need to select the appropriate column to display.
I would appreciate any advice anyone can provide. The main problem is that I don't know how to get the SQL data into the data and schema files needed to display a FusionTime chart. This is to be displayed on a locally hosted webpage.
Many thanks for any time you can provide to help with this
FusionTime needs JSON, so you must write a JSON file from PHP, like this:
.....
// $dataIngGast is the array of rows already fetched from MySQL
$result = json_encode($dataIngGast, JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE | JSON_NUMERIC_CHECK | JSON_PRETTY_PRINT);
//echo $result;
$arquivo = "column-line-combination-data-gasto-ingreso-finanzas.json";
$fp = fopen($arquivo, "w"); // "w" overwrites; "a+" would append on every run and corrupt the JSON
fwrite($fp, $result);
fclose($fp);
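For reference, the DataTable FusionTime expects is just two JSON documents: a schema (one entry per column) and a data array (one entry per row). Below is a minimal sketch of building that shape from already-fetched rows, written in Java; the column names, date format, and row layout here are assumptions you would map to your own SELECT results, and the same string-building idea carries over directly to PHP.

```java
import java.util.List;

// Sketch: build the two JSON documents FusionTime expects (schema + data)
// from rows you have already fetched (e.g. via JDBC). The column names and
// date format below are made up -- adjust them to your own table.
public class FusionTimeJson {

    // Schema: one entry per column, in the same order as the data rows.
    public static String schemaJson() {
        return "[{\"name\": \"Time\", \"type\": \"date\", \"format\": \"%Y-%m-%d\"},"
             + "{\"name\": \"Value\", \"type\": \"number\"}]";
    }

    // Data: a JSON array of [date, value] rows.
    public static String dataJson(List<Object[]> rows) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < rows.size(); i++) {
            Object[] r = rows.get(i);
            if (i > 0) sb.append(",");
            sb.append("[\"").append(r[0]).append("\",").append(r[1]).append("]");
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        List<Object[]> rows = List.of(
                new Object[]{"2024-01-01", 42},
                new Object[]{"2024-01-02", 57});
        System.out.println(schemaJson());
        System.out.println(dataJson(rows)); // [["2024-01-01",42],["2024-01-02",57]]
    }
}
```

You then feed the two documents to FusionTime's DataStore as shown in the getting-started guide linked above.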
Arabic data is converted into ???? when a Java program queries an XML payload from an Oracle table using a SELECT statement
I have written a JDBC program to query an XML-type payload from an Oracle table using a SELECT statement. A few XML elements in the payload, such as FirstName and LastName, contain Arabic characters. When I run my program, the SELECT query returns the XML payload, but the elements containing Arabic characters come back as ????.
I am not sure why this is happening.
Does anyone have a solution for this problem?
Thanks in Advance.
I experienced this problem with Java and MySQL in Eclipse. The solution was:
In Eclipse, right-click your project, choose Properties, and set the text file encoding to UTF-8.
Then, in the database, set the base encoding and the tables to UTF-8.
Finally, all database connections must be UTF-8 encoded,
like this
String url = "jdbc:mysql://host/database?useUnicode=true&characterEncoding=utf8";
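The "????" symptom itself is just lossy charset conversion somewhere in the chain: the client encoding, the JDBC driver settings, or a database column whose character set cannot hold Arabic. Here is a small self-contained demonstration of the effect that assumes nothing about your database:

```java
import java.nio.charset.StandardCharsets;

// Sketch: why Arabic text turns into "????". When a string is encoded with a
// charset that cannot represent its characters (here US-ASCII), each
// unmappable character is replaced by '?'. The same thing happens inside a
// driver or database column whose character set is not Unicode-capable.
public class MojibakeDemo {

    public static String forceAscii(String s) {
        byte[] bytes = s.getBytes(StandardCharsets.US_ASCII); // lossy encode
        return new String(bytes, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        String arabic = "مرحبا";                 // "hello" in Arabic, 5 characters
        System.out.println(forceAscii(arabic));   // ?????

        // Round-tripping through UTF-8 is lossless:
        byte[] utf8 = arabic.getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(utf8, StandardCharsets.UTF_8));
    }
}
```

For Oracle specifically, also check that the database character set (or the column type, e.g. NVARCHAR2) can store Arabic, and that the client-side encoding matches; the fix is always to make every hop in the chain Unicode-capable.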
The SQLite history file is stored in C:...\AppData\Local\Google\Chrome\User Data\Default\History. How can I retrieve only two fields of this database, visits.visit_time (converted to a date) and urls.url, in Java, in order to get output like this: https://example.com/ 2018-10-25 08:42:27?
Since it's a SQLite file, you're probably going to have to use JDBC (https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) with a SQLite driver for query purposes; then you can do whatever you want with the results.
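One wrinkle worth knowing before writing the query: Chrome stores visit times as microseconds since the 1601-01-01 epoch (the Windows/WebKit epoch), not the Unix epoch, so converting the raw value directly to a java.util.Date gives wrong results. A sketch of the conversion (the constant is the number of seconds between the 1601 and 1970 epochs):

```java
import java.time.Instant;

// Sketch: convert Chrome History timestamps (microseconds since
// 1601-01-01 00:00:00 UTC) to java.time.Instant and back.
public class WebkitTime {

    // Seconds between 1601-01-01 and 1970-01-01 (the Unix epoch).
    static final long EPOCH_OFFSET_SECONDS = 11644473600L;

    public static Instant webkitToInstant(long webkitMicros) {
        long unixMicros = webkitMicros - EPOCH_OFFSET_SECONDS * 1_000_000L;
        return Instant.ofEpochSecond(unixMicros / 1_000_000L,
                                     (unixMicros % 1_000_000L) * 1_000L);
    }

    public static long instantToWebkit(Instant t) {
        return (t.getEpochSecond() + EPOCH_OFFSET_SECONDS) * 1_000_000L
             + t.getNano() / 1_000L;
    }

    public static void main(String[] args) {
        long micros = instantToWebkit(Instant.parse("2018-10-25T08:42:27Z"));
        System.out.println(micros + " -> " + webkitToInstant(micros));
    }
}
```

With a SQLite JDBC driver you would then run something like `SELECT urls.url, visits.visit_time FROM visits JOIN urls ON visits.url = urls.id` (names per Chrome's History schema) and pass each visit_time through webkitToInstant before formatting.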
For now I have a CSV file with several columns in each row. Eventually, I will have a SQL relational database structure. I was wondering if there are any libraries to easily extract this data into a list of Java objects.
Example:
title | location | date
EventA | los angeles, ca | 05-29-2014
EventB | New York, NY | 08-23-2013
This is the structure of the data in the CSV. I would have a Java object called Event:
Event(String title, String location, String Date)
I am aware of openCSV. Is that what I would need to use for CSV? If so, what is the corresponding solution for a SQL relational database?
Also, can reading a CSV only be done in the main method?
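To make the question concrete, here is a minimal hand-rolled sketch (no library) that turns lines like the ones above into Event objects. The quote-aware splitter exists only because fields such as "los angeles, ca" contain commas, which is exactly the kind of edge case openCSV handles for you. It also shows that CSV reading is ordinary code and can live in any method, not only main:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: parse simple CSV lines into Event objects by hand. For anything
// beyond this (escaped quotes, newlines in fields), use a real CSV library.
public class CsvToEvents {

    public record Event(String title, String location, String date) {}

    // Minimal quote-aware splitter: commas inside double quotes do not split.
    public static List<String> splitCsv(String line) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (char c : line.toCharArray()) {
            if (c == '"') inQuotes = !inQuotes;
            else if (c == ',' && !inQuotes) { out.add(cur.toString()); cur.setLength(0); }
            else cur.append(c);
        }
        out.add(cur.toString());
        return out;
    }

    public static Event toEvent(String line) {
        List<String> f = splitCsv(line);
        return new Event(f.get(0).trim(), f.get(1).trim(), f.get(2).trim());
    }

    public static void main(String[] args) {
        Event e = toEvent("EventA,\"los angeles, ca\",05-29-2014");
        System.out.println(e);
    }
}
```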
For when you convert to the SQL database, you can use Apache's dbutils for a low-level solution, or Hibernate for a high-level solution.
dbutils
You can implement a ResultSetHandler to convert a result set into an object, or, if it's a POJO, the framework can convert it for you. There are examples on the Apache site.
http://commons.apache.org/proper/commons-dbutils/
Hibernate
There are plenty of tutorials out there for working with Hibernate.
http://www.hibernate.org/
Try JSefa, which allows you to annotate Java classes that can be used in a serialization and de-serialization process.
From the tutorial:
The annotations for CSV are similar to the XML ones.
@CsvDataType()
public class Person {
    @CsvField(pos = 1)
    String name;

    @CsvField(pos = 2, format = "dd.MM.yyyy")
    Date birthDate;
}
Serialization
Serializer serializer = CsvIOFactory.createFactory(Person.class).createSerializer();
This time we used the super interface Serializer, so that we can abstract from the chosen format type (XML, CSV, FLR) in the following code.
The next part should be no surprise:
serializer.open(writer);
// call serializer.write for every object to serialize
serializer.close(true);
The result
Erwin Schmidt;23.05.1964
Thomas Stumm;12.03.1979
Is there an open-source, file-based (NOT in-memory) JDBC driver for CSV files? My CSV files are dynamically generated from the UI according to the user's selections, and each user will have a different CSV file. I'm doing this to reduce database hits, since the information is contained in the CSV file. I only need to perform SELECT operations.
HSQLDB allows for indexed searches if we specify an index, but I won't be able to provide a unique column that can be used as an index, hence it performs the SQL operations in memory.
Edit:
I've tried CsvJdbc, but that doesn't support simple operations like ORDER BY and GROUP BY. It is also unclear whether it reads from the file or loads it into memory.
I've tried xlSQL, but that again relies on HSQLDB and only works with Excel, not CSV. Plus, it is no longer in development or supported.
H2, but that only reads the CSV; it doesn't let me query the file itself with SQL.
You can solve this problem using the H2 database.
The following groovy script demonstrates:
Loading data into the database
Running a "GROUP BY" and "ORDER BY" sql query
Note: H2 supports in-memory databases, so you have the choice of persisting the data or not.
import groovy.sql.Sql

// Create the database
def sql = Sql.newInstance("jdbc:h2:db/csv", "user", "pass", "org.h2.Driver")

// Load CSV file (the sample data below has no header row, so name the columns explicitly)
sql.execute("CREATE TABLE data (id INT PRIMARY KEY, message VARCHAR(255), score INT) AS SELECT * FROM CSVREAD('data.csv', 'ID,MESSAGE,SCORE')")

// Print results (first row of the grouped result: score 0 occurs 6 times in the sample data)
def result = sql.firstRow("SELECT message, score, count(*) FROM data GROUP BY message, score ORDER BY score")
assert result[0] == "hello world"
assert result[1] == 0
assert result[2] == 6

// Cleanup
sql.close()
Sample CSV data:
0,hello world,0
1,hello world,1
2,hello world,0
3,hello world,1
4,hello world,0
5,hello world,1
6,hello world,0
7,hello world,1
8,hello world,0
9,hello world,1
10,hello world,0
If you check out the SourceForge project csvjdbc, please report your experiences. The documentation says it is useful for importing CSV files.
Project page
This was discussed on Superuser https://superuser.com/questions/7169/querying-a-csv-file.
You can use the Text Tables feature of hsqldb: http://hsqldb.org/doc/2.0/guide/texttables-chapt.html
csvsql/gcsvsql are also possible solutions (but there is no JDBC driver, you will have to run a command line program for your query).
sqlite is another solution but you have to import the CSV file into a database before you can query it.
Alternatively, there is commercial software such as http://www.csv-jdbc.com/ which will do what you want.
To do anything with a file you have to load it into memory at some point. What you could do is open the file and read it line by line, discarding each previous line as you read in the next one. The only downside to this approach is that it is linear. Have you thought about using something like memcached on a server, where you query key-value stores held in memory instead of dumping to a CSV file?
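The line-by-line idea can be sketched as a tiny hand-rolled SELECT: stream the file through a reader, keep only the matching rows, and never hold more than one line in memory at a time. A StringReader stands in for a FileReader here so the example is self-contained:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: a streaming "SELECT ... WHERE" over CSV lines. Memory use is
// bounded by one input line plus the matching rows.
public class StreamingSelect {

    public static List<String> select(BufferedReader in, String mustContain) {
        List<String> hits = new ArrayList<>();
        try {
            String line;
            while ((line = in.readLine()) != null) {  // previous line is discarded
                if (line.contains(mustContain)) hits.add(line);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return hits;
    }

    public static void main(String[] args) {
        String csv = "1,alice,NY\n2,bob,LA\n3,carol,NY\n";
        List<String> rows = select(new BufferedReader(new StringReader(csv)), "NY");
        System.out.println(rows); // [1,alice,NY, 3,carol,NY]
    }
}
```

For real use you would swap the StringReader for `Files.newBufferedReader(path)` and replace the substring match with a proper per-field predicate.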
You can use either a specialized JDBC driver like CsvJdbc (http://csvjdbc.sourceforge.net), or you may choose to configure a database engine such as MySQL to treat your CSV as a table and then manipulate the CSV through a standard JDBC driver.
The trade-off here is available SQL features vs. performance.
Direct access to the CSV via CsvJdbc (or similar) allows very quick operations on big data volumes, but without the ability to sort or group records using SQL commands;
the MySQL CSV engine provides a rich set of SQL features, but at a cost in performance.
So if your table is relatively small, go with MySQL. However, if you need to process big files (> 100 MB) without grouping or sorting, go with CsvJdbc.
If you need both, i.e. to handle very big files and to manipulate them using SQL, then the optimal course of action is to load the CSV into a normal database table (e.g. MySQL) first and then handle the data as a usual SQL table.