I want to monitor a child's activity while they use the internet. My app is in Java, and I want the URLs to be saved in a text file. Can you give me a hint for this? How can I store the URLs from all open tabs in Google Chrome (or other browsers, such as Mozilla Firefox) in a file using Java?
The code I am currently using is below. What it does is store the last 20 URLs; it does not work in real time. Can you please update the SQL query so that it takes the URLs of the present date, or does anyone know the database structure of Chrome's History file?
Class.forName("org.sqlite.JDBC");
connection = DriverManager.getConnection("jdbc:sqlite:C:\\Users\\abc\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\History");
statement = connection.createStatement();
resultSet = statement.executeQuery("SELECT * FROM urls ORDER BY id DESC LIMIT 20");
//SELECT * FROM urls WHERE id=last
//url >= CURDATE()-3
while (resultSet.next())
{
    System.out.println("URL [" + resultSet.getString("url") + "]" + ", visit count [" + resultSet.getString("visit_count") + "]");
}
SELECT url, COUNT(*) AS url_cnt FROM urls GROUP BY url;
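A hint on the date filter: Chrome's History database has a urls table whose columns include id, url, title, visit_count and last_visit_time, and last_visit_time is stored as microseconds since 1601-01-01 (the WebKit epoch), not the Unix epoch. Below is a sketch of building a "today's URLs" query under that assumption; the copy-the-file remark is a workaround for Chrome keeping the live History file locked:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

public class ChromeHistoryToday {
    // Seconds between the WebKit epoch (1601-01-01) and the Unix epoch (1970-01-01).
    static final long WEBKIT_EPOCH_OFFSET_SECONDS = 11_644_473_600L;

    // Convert a Unix-epoch instant to Chrome's microsecond timestamp.
    static long toWebKitMicros(Instant instant) {
        return (instant.getEpochSecond() + WEBKIT_EPOCH_OFFSET_SECONDS) * 1_000_000L;
    }

    public static void main(String[] args) {
        long midnight = toWebKitMicros(
                LocalDate.now().atStartOfDay(ZoneId.systemDefault()).toInstant());
        // Replace the "LIMIT 20" query with a filter on last_visit_time:
        String sql = "SELECT url, title, visit_count FROM urls"
                + " WHERE last_visit_time >= " + midnight
                + " ORDER BY last_visit_time DESC";
        System.out.println(sql);
        // Run it against a COPY of the History file (Chrome locks the live one), e.g.:
        //   connection = DriverManager.getConnection("jdbc:sqlite:" + historyCopyPath);
        //   resultSet = connection.createStatement().executeQuery(sql);
    }
}
```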
As per the title, I'm currently using JDBC in Eclipse to connect to my PostgreSQL database.
I have been running EXPLAIN ANALYZE statements to retrieve query plans from Postgres itself. However, is it possible to store these query plans in a structure that resembles a tree, e.g. a main branch with sub-branches, etc.? I read somewhere that it is a good idea to store the plan in an XML document first and manipulate it from there.
Is there an API in Java for me to achieve this? Thanks!
Try using the XML format, e.g.:
t=# explain (analyze, format xml) select * from pg_database join pg_class on true;
QUERY PLAN
----------------------------------------------------------------
<explain xmlns="http://www.postgresql.org/2009/explain"> +
<Query> +
<Plan> +
<Node-Type>Nested Loop</Node-Type> +
<Join-Type>Inner</Join-Type> +
<Startup-Cost>0.00</Startup-Cost> +
<Total-Cost>23.66</Total-Cost> +
<Plan-Rows>722</Plan-Rows> +
<Plan-Width>457</Plan-Width> +
<Actual-Startup-Time>0.026</Actual-Startup-Time> +
<Actual-Total-Time>3.275</Actual-Total-Time> +
<Actual-Rows>5236</Actual-Rows> +
<Actual-Loops>1</Actual-Loops> +
<Plans> +
<Plan> +
<Node-Type>Seq Scan</Node-Type> +
<Parent-Relationship>Outer</Parent-Relationship> +
...and so on
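The XML above already is the tree you are after, so the standard JAXP DOM API that ships with the JDK can load it directly: each nested Plans/Plan element becomes a child node. A minimal sketch, assuming the plan XML has been captured into a string (the SAMPLE here is a cut-down plan, not real EXPLAIN output):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class PlanTree {
    static final String SAMPLE =
            "<explain xmlns=\"http://www.postgresql.org/2009/explain\"><Query><Plan>"
            + "<Node-Type>Nested Loop</Node-Type><Plans>"
            + "<Plan><Node-Type>Seq Scan</Node-Type></Plan>"
            + "<Plan><Node-Type>Seq Scan</Node-Type></Plan>"
            + "</Plans></Plan></Query></explain>";

    // Parse EXPLAIN (FORMAT XML) output and render the plan as an indented tree.
    static String tree(String xml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document doc = f.newDocumentBuilder().parse(
                    new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            // The first <Plan> in document order is the root of the plan tree.
            Element root = (Element) doc.getElementsByTagNameNS(
                    "http://www.postgresql.org/2009/explain", "Plan").item(0);
            StringBuilder out = new StringBuilder();
            walk(root, 0, out);
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Each <Plan> prints its Node-Type, then recurses into its <Plans>/<Plan> children.
    static void walk(Element plan, int depth, StringBuilder out) {
        NodeList children = plan.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (!(child instanceof Element)) continue;
            Element e = (Element) child;
            if ("Node-Type".equals(e.getLocalName())) {
                out.append("  ".repeat(depth)).append(e.getTextContent()).append('\n');
            } else if ("Plans".equals(e.getLocalName())) {
                NodeList subs = e.getChildNodes();
                for (int j = 0; j < subs.getLength(); j++) {
                    Node s = subs.item(j);
                    if (s instanceof Element && "Plan".equals(s.getLocalName())) {
                        walk((Element) s, depth + 1, out);
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        System.out.print(tree(SAMPLE));
    }
}
```

The same walk can of course build your own node objects instead of a string, if you want to manipulate the tree rather than print it.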
I have a CSV file in which there is multiline data. The important thing to note about this file is that each record ends with a CRLF sequence, and the incomplete (continued) lines of a multiline record end with LF. If I use the .ctl file below for SQL*Loader, the records load successfully. Below is a snapshot of the CSV file (correct file.jpg) and the ctl file.
ctl File :
OPTIONS (
ERRORS=1,
SKIP=1
)
LOAD DATA
CHARACTERSET 'UTF8'
INFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\csvfile.csv'
BADFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\badfile.bad'
DISCARDFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\DSCfile.dsc'
CONTINUEIF LAST != '"'
INTO TABLE ODI_DEV_TARGET.CASE
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ID "REPLACE(:ID,'<br>',chr(10))",
ISDELETED "CASE WHEN :ISDELETED='true' then 'T' ELSE 'F' END",
CASENUMBER "REPLACE(:CASENUMBER,'<br>',chr(10))",
DESCRIPTION CHAR(30000)
)
Now, when I load another CSV file which contains exactly the same data, the records fail to load. The only difference in this second CSV file is that its records end with LF and the incomplete (continued) lines end with CRLF. I used the same ctl file but got the error: Rejected - Error on table ODI_DEV_TARGET.CASE, column DESCRIPTION. second enclosure string not present. Below is the snapshot of the second CSV file.
I also noticed that if I change the INFILE option of the ctl file to INFILE 'C:\Users\puspender.tanwar\Desktop\try_SQLLDR\csvfile.csv "str '\r\n'"', the records also get loaded, but only for the first CSV. So I thought that if I used "str '\n'" instead of "str '\r\n'" then the second CSV's records should load, but unfortunately that did not happen.
Please advise me how to handle this by modifying the .ctl file, or suggest any other way to resolve this.
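If the ctl file cannot be made to cope, one workaround is to normalize the second file into the same shape as the first before running SQL*Loader, i.e. swap the roles of LF and CRLF so that records again end with CRLF and embedded breaks with LF. A sketch of that swap in Java; the \u0000 placeholder is an assumption that this character never occurs in the data:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SwapLineEndings {
    // Swap CRLF and LF: embedded CRLF breaks become LF, record-ending LF becomes CRLF.
    static String swap(String s) {
        return s.replace("\r\n", "\u0000")  // park embedded CRLF in a placeholder
                .replace("\n", "\r\n")      // bare-LF record ends -> CRLF
                .replace("\u0000", "\n");   // placeholder -> LF
    }

    public static void main(String[] args) throws IOException {
        Path in = Paths.get(args[0]);
        Path out = Paths.get(args[1]);
        String text = new String(Files.readAllBytes(in), StandardCharsets.UTF_8);
        Files.write(out, swap(text).getBytes(StandardCharsets.UTF_8));
    }
}
```

After this preprocessing step, the original ctl file (which expects CRLF record terminators) should apply to the second file unchanged.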
I'm quite new to Android and recently I've run into the following issue:
I'm using the Android Contacts database, specifically the Data table. I'm putting some info there with a new MIME type and trying to look for this info during search. The problem is that I'm using the SQLite LIKE operator, which is case-sensitive for non-Latin characters. Another problem is that I can't change the database in any way, because it's Android's built-in database.
Builder builder = Data.CONTENT_URI.buildUpon();
loader.setSelection(getIndividualsSelection());
if (query != null && !query.trim().equals("")) {
    query = query.trim();
    loader.setSelection(Data.MIMETYPE + "='" +
        MY_MIMETYPE + "' " + " AND ( " +
        MY_DATA_COLUMN +
        " LIKE '" + query + "%' " +
        specialCharsEscape + " COLLATE NOCASE)");
    loader.setSelectionArgs(null);
    loader.setUri(builder.build());
    loader.setProjection(MY_PROJECTION);
    loader.setSortOrder(MY_SORT_ORDER);
}
This is all inside the onCreateLoader function of LoaderCallbacks, where loader is of type CursorLoader. Do you have any idea how to force SQLite not to be case-sensitive here?
I've tried, of course, using the SQLite functions UCASE and LCASE, but that doesn't work. Using REGEXP results in an exception for this database, as does using MATCH... I will appreciate any help.
Android has localized collations.
To actually be able to use a collation for comparisons, use BETWEEN instead of LIKE; for example:
MyDataColumn BETWEEN 'mąka' COLLATE UNICODE AND 'mąka\uDBFF\uDFFF' COLLATE UNICODE
U+10FFFF is the last UTF-8-encodable Unicode character; a Java string probably would encode it with surrogates as "\uDBFF\uDFFF".
Please note that the UNICODE collation is broken in some Android versions.
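Applied to the loader code above, the BETWEEN range can be built with selection arguments instead of string concatenation, which also sidesteps the escaping problem; the column name passed in is the MY_DATA_COLUMN placeholder from the question. A sketch:

```java
public class PrefixRange {
    // U+10FFFF, the highest Unicode code point, as a Java surrogate pair.
    static final String LAST_CHAR = "\uDBFF\uDFFF";

    // Selection clause for a case-insensitive prefix match via BETWEEN
    // (assumes the UNICODE collation is available on the device).
    static String selection(String column) {
        return column + " BETWEEN ? COLLATE UNICODE AND ? COLLATE UNICODE";
    }

    // Bounds covering every string that starts with the given prefix.
    static String[] args(String prefix) {
        return new String[] { prefix, prefix + LAST_CHAR };
    }
}
```

Inside onCreateLoader this would look roughly like loader.setSelection(Data.MIMETYPE + "=? AND " + PrefixRange.selection(MY_DATA_COLUMN)) with loader.setSelectionArgs(new String[]{MY_MIMETYPE, query, query + PrefixRange.LAST_CHAR}).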
I am trying to read a UTF-8 string from my MySQL database, which I create using:
CREATE DATABASE april
DEFAULT CHARACTER SET utf8
DEFAULT COLLATE utf8_general_ci;
I make the table of interest using:
DROP TABLE IF EXISTS `article`;
CREATE TABLE `article` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`text` longtext NOT NULL,
`date_created` timestamp DEFAULT NOW(),
PRIMARY KEY (`id`)
) CHARACTER SET utf8;
If I select * from article in the MySQL command line util, I get:
OIL sands output at Nexen’s Long Lake project dropped in February.
However, when I do
ResultSet rs = st.executeQuery(QUERY);
long id = -1;
String text = null;
Timestamp date = null;
while (rs.next()) {
    text = rs.getString("text");
    LOGGER.debug("text=" + text);
}
the output I get is:
text=OIL sands output at Nexen’s Long Lake project dropped in February.
I get my Connection via:
DriverManager.getConnection("jdbc:" + this.dbms + "://" + this.serverHost + ":" + this.serverPort + "/" + this.dbName + "?useUnicode=true&user=" + this.username + "&password=" + this.password);
I've also tried, instead of the useUnicode parameter:
characterEncoding=UTF-8
and
characterEncoding=utf8
I also tried, instead of the line text = rs.getString("text"):
byte[] temp = rs.getBytes("text");
String[] encodings = new String[]{"US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16BE", "UTF-16LE", "UTF-16", "Latin1"};
for (String encoding : encodings) {
    text = new String(temp, encoding);
    LOGGER.debug(encoding + ": " + text);
}
// Which outputted:
US-ASCII: OIL sands output at Nexen��������s Long Lake project dropped in February.
ISO-8859-1: OIL sands output at Nexenââ¬â¢s Long Lake project dropped in February.
UTF-8: OIL sands output at Nexen’s Long Lake project dropped in February.
UTF-16BE: 佉䰠獡湤猠潵瑰畴琠乥硥滃ꋢ芬ꉳ⁌潮朠䱡步⁰牯橥捴牯灰敤渠䙥扲畡特�
UTF-16LE: 䥏⁌慳摮畯灴瑵愠⁴敎數썮겂蓢玢䰠湯慌敫瀠潲敪瑣搠潲灰摥椠敆牢慵祲�
UTF-16: 佉䰠獡湤猠潵瑰畴琠乥硥滃ꋢ芬ꉳ⁌潮朠䱡步⁰牯橥捴牯灰敤渠䙥扲畡特�
Latin1: OIL sands output at Nexenââ¬â¢s Long Lake project dropped in February.
I load the strings into the DB using some pre-defined sql in a file. This file is UTF-8 encoded.
mysql -u april -p -D april < insert_articles.sql
This file includes the line:
INSERT INTO article (text) value ("OIL sands output at Nexen’s Long Lake project dropped in February.");
When I print out that file within my application using:
BufferedReader reader = new BufferedReader(new FileReader(new File("/home/path/to/file/sql_article_inserts.sql")));
String str;
while((str = reader.readLine()) != null) {
LOGGER.debug("LINE: " + str);
}
I get the correct, expected output:
LINE: INSERT INTO article (text) value ("OIL sands output at Nexen’s Long Lake project dropped in February.");
Any help would be much appreciated.
Some System Details:
I am running on linux (Ubuntu)
Edits:
* Edited to specify OS
* Edited to detail output of reading sql input file.
* Edited to specify more about how the data is inserted into the DB.
* Edited to fix typo in code, and clarify example.
Is it possible you're reading the log file using the incorrect encoding? windows-1252, I am guessing.
UTF-8: OIL sands output at Nexen’s Long Lake project dropped in February.
If this is appearing in the log, do a hex dump of the log file. If the data is UTF-8, you would expect the sequence Nexen’s to become 4E 65 78 65 6E E2 80 99 73. If some other application reads this as a native ANSI encoding, it'll decode it as Nexen’s.
To confirm, you can also dump the individual characters of the return value to see if they are correct in UTF-16:
//untested
for(char ch : text.toCharArray()) {
System.out.printf("%04x%n", (int) ch);
}
I'm assuming all data is in the BMP, so you can just look up the results in the Unicode charts.
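To see what that misreading produces end to end, here is a small self-contained sketch (hard-coded string, no database involved): it decodes the UTF-8 bytes of the correct string as windows-1252 and then dumps the UTF-16 code units, i.e. the same check suggested above:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    // Decode a string's UTF-8 bytes with the wrong charset,
    // as a confused application would.
    static String misread(String correct, Charset wrong) {
        return new String(correct.getBytes(StandardCharsets.UTF_8), wrong);
    }

    public static void main(String[] args) {
        String correct = "Nexen\u2019s";                      // Nexen’s
        String garbled = misread(correct, Charset.forName("windows-1252"));
        System.out.println(garbled);                          // Nexenâ€™s
        // Dump the UTF-16 code units to compare against the Unicode charts.
        for (char ch : garbled.toCharArray()) {
            System.out.printf("%04x%n", (int) ch);
        }
    }
}
```

The bytes E2 80 99 (UTF-8 for U+2019) come back as the three characters â € ™, which matches the corrupted output in the question.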
Try setting the database itself to UTF-8. When creating the DB:
CREATE DATABASE mydb
DEFAULT CHARACTER SET utf8
DEFAULT COLLATE utf8_general_ci;
Also see MySQL reference on connection charsets and MySQL reference on configuring charsets for applications
Parameters in the JDBC URL only define how the driver should communicate with the server. If the server does not use UTF-8 by default, these parameters won't change that either.
Have you tried executing the following SQL statement after connecting? (This should switch the current connection to UTF-8 on the server side too.)
SET NAMES utf8
There are several character encodings involved.
The terminal/cmd window that the mysql command line tool is running in (PuTTY?)
The environment in the shell (bash) where you are running your stuff (LC_CTYPE)
MySQL internal (used in tables): you have defined this to be UTF-8
The JVM internal (always UTF-16)
The character set used by the writers the logger uses: the default (system property), or perhaps defined in the logging framework's configuration
The terminal/cmd/editor that you read the logs with (PuTTY/bash?)
If the terminal settings are wrong, you might have inserted corrupted data into MySQL (if your terminal is ISO-8859-1 and you read a file that is UTF-8, for instance). Assuming Linux, mysql should look at the env LC_CTYPE (but I am not 100% sure that it does).
The JDBC driver is responsible for converting the database character encoding to the JVM's internal format (UTF-16), so that should not be a problem. But you can test this with a simple Java program that inserts a hard-coded string and reads it back. Print the original and received strings: they should be identical. If both are wrong, you have a problem with the terminal's character set definition.
Use a string like "HejÅÄÖ" for some drama...
Also, write a small program that prints the same string to a file using a PrintWriter that converts to UTF-8, and verify that the tool you use for reading the log prints that file correctly. If not, the terminal's settings are to be suspected, again.
String test = "Test HEJ \u00C5\u00C4\u00D6 ÅÄÖ";
// here's how to define what character set to use when writing to a FileOutputStream
PrintWriter pw = new PrintWriter("test.txt", "UTF-8");
pw.println(test);
pw.flush();
pw.close();
System.out.println(test);
output -> Test HEJ ÅÄÖ ÅÄÖ
The contents of the file test.txt should look the same.