I would like my data to display like this (see the MySQL result screenshot), but I only get this (see the JTable screenshot).
I used this query:
GROUP_CONCAT(a.montant, a.type_avance,a.date_avance,a.remark SEPARATOR '\n') as Avance
and it works just fine in MySQL, but it doesn't work in the JFrame.
This is my Java code:
//declaring the table
tb_imp_pr = new JTable();
tb_imp_pr.setRowHeight(50);
tb_imp_pr.setBackground(Color.WHITE);
scrollPane.setViewportView(tb_imp_pr);
// filling the table
public void filltable_pr() {
try {
connectore.statement = connectore.connection.prepareStatement("SELECT i.*,GROUP_CONCAT(a.montant,"
+" a.type_avance,a.date_avance,a.remark SEPARATOR '\n') as Avance "
+" FROM info_impayee i LEFT JOIN avance a ON i.n_dossier = a.n_dossier GROUP by i.n_dossier,i.date_dossier");
connectore.resultSet = connectore.statement.executeQuery();
tb_imp_pr.setModel(DbUtils.resultSetToTableModel(connectore.resultSet));
} catch (Exception ex) {
ex.printStackTrace(); // log the failure instead of swallowing it silently
}
}
I have a List which holds names. I read a name with Scanner, loop over the list with an enhanced for loop, and check whether the name is in the list: if it is, the client table should be updated, otherwise both the category and client tables should be updated (both tables are in the database). The problem is that if my list has 4 names, it should only evaluate the if condition and not go to the else branch until it has checked all 4 names; only if the name I enter via Scanner is not in the list at all should it go to the else branch. I need the logic for this:
String n = scann.nextLine();
List<String> li = new ArrayList<>();
li.add("abc");
li.add("def");
li.add("ghi");
li.add("jkl");
for (int i = 0; i < li.size(); i++) {
if (n.equals(li.get(i))) {
System.out.println("client table update");
} else {
System.out.println("category and client table update");
}
}
}
I kind of get the gist of what you are asking, but please be more precise.
From what I understand, you want to print either the first or the second message, depending on whether your name is in the list or not; in your case, both get printed.
That is because your prints are inside the for loop, so they run 4 times and both options get printed regardless, since your name can't be all 4 names at the same time.
Here is how I would solve this:
List<String> li = new ArrayList<>();
li.add("abc");
li.add("def");
li.add("ghi");
li.add("jkl");
boolean inList = false;
for (int i = 0; i < li.size(); i++) {
if (n.equals(li.get(i))) {
inList = true;
}
}
if (inList) {
System.out.println("client table update");
} else {
System.out.println("category and client table update");
}
You can simply use contains() to check whether the name exists in the list, i.e.:
if(li.contains(n)){
System.out.println("client table update");
}
else{
System.out.println("category and client table update");
}
String n = scann.nextLine();
List<String> li = new ArrayList<>();
li.add("abc");
li.add("def");
li.add("ghi");
li.add("jkl");
// contains() already checks the whole list, so no loop is needed here
if (li.contains(n)) {
    System.out.println("client table update");
} else {
    System.out.println("category and client table update");
}
I am trying to update Hive table partitions using the Hive Java API. These are the steps I am following:
1. Extract the partitions which are not in the metastore.
2. Add these partitions to the table.
3. Go back to the Hive command line and run SHOW PARTITIONS and MSCK REPAIR TABLE just to make sure everything is fine.
What I got:
1. SHOW PARTITIONS is working fine (it lists the partitions I added).
2. MSCK REPAIR is not working (I get: Partitions are not present in metastore).
Here is the piece of code that I am using:
public class HiveMetastoreChecker {
public static void main(String[] args) {
final String dbName = "db_name";
final String tableName = "db_name.table_name";
CheckResult result = new CheckResult();
try {
Configuration configuration = new Configuration();
HiveConf conf = new HiveConf();
conf.addResource(configuration);
Hive hive = Hive.get(conf, true);
HiveMetaStoreChecker checker = new HiveMetaStoreChecker(hive);
Table table = new Table(dbName, tableName);
table.setDbName(dbName);
table.setInputFormatClass(TextInputFormat.class);
table.setOutputFormatClass(HiveIgnoreKeyTextOutputFormat.class);
table = hive.getTable(dbName, tableName);
checker.checkMetastore(dbName, tableName, null, result);
System.out.println(table.getDataLocation());
List<CheckResult.PartitionResult> partitionNotInMs = result.getPartitionsNotInMs();
System.out.println("not in ms " + partitionNotInMs.size());
List<org.apache.hadoop.hive.ql.metadata.Partition> partitions = hive.getPartitions(table);
System.out.println("partitions size " + partitions.size());
AddPartitionDesc apd = new AddPartitionDesc(table.getDbName(), table.getTableName(), false);
List<String> finalListOfPartitionsNotInMs = new ArrayList<String>();
for (CheckResult.PartitionResult part : partitionNotInMs){
if(!finalListOfPartitionsNotInMs.contains(part.getPartitionName().replace("/",""))){
finalListOfPartitionsNotInMs.add(part.getPartitionName().replace("/",""));
}
}
for (String partition:finalListOfPartitionsNotInMs) {
apd.addPartition(Warehouse.makeSpecFromName(partition), table.getDataLocation().toString());
}
hive.createPartitions(apd);
} catch (HiveException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (MetaException e) {
e.printStackTrace();
}
}
}
Any kind of help would be appreciated.
Thanks.
Is MSCK REPAIR failing on Hive? If yes, check whether the partition column name is in capital letters. I hit the same issue where my partition on AWS S3 looked like DCA=1000.
If that is the case, and you don't want to rename the partition to lower case, execute MSCK REPAIR through Spark SQL instead and it will work.
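For reference, here is a minimal sketch of running the repair through Spark SQL from Java; the SparkMsckRepair class name is made up, and db_name.table_name is taken from the question, so adjust both to your environment:
import org.apache.spark.sql.SparkSession;

public class SparkMsckRepair { // hypothetical helper class, not part of the original code
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("msck-repair")
                .enableHiveSupport() // requires a Hive-enabled Spark build and hive-site.xml on the classpath
                .getOrCreate();

        // Per the answer above, running the repair through Spark SQL avoids the
        // problem with upper-case partition directory names such as DCA=1000.
        spark.sql("MSCK REPAIR TABLE db_name.table_name");

        spark.stop();
    }
}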
Please help me, I get this error when I run my app using IDEA.
Here is a screenshot of the error:
http://prntscr.com/en3lcu
Here is the part of the code where the table is created:
private String getTableDDL(final Class<? extends GlassContract.Table> table) {
return getTableDDL(table, GlassContract.getTableName(table));
}
private String getTableDDL(final Class<? extends GlassContract.Table> table, String tableName) {
final StringBuilder sql = new StringBuilder(128);
sql.append("create table ").append(tableName).append(" (");
for (final Field field : table.getFields()) {
if (field.getName().startsWith("_") || field.isAnnotationPresent(Deprecated.class))
continue;
try {
sql.append(field.get(null));
} catch (Exception ignore) {
}
try {
final Field type = table.getDeclaredField("_SQL_" + field.getName() + "_TYPE");
sql.append(' ').append(type.get(null));
} catch (Exception ignore) {
sql.append(" TEXT");
}
sql.append(',');
}
try {
final Field type = table.getDeclaredField("_PK_COMPOSITE");
sql.append("PRIMARY KEY(").append(type.get(null)).append(")");
sql.append(',');
} catch (Exception ignore) {
// ignore
}
try {
final Field type = table.getDeclaredField("_UNIQUE_COMPOSITE");
sql.append("UNIQUE(").append(type.get(null)).append(")");
sql.append(',');
} catch (Exception ignore) {
// ignore
}
sql.setLength(sql.length() - 1); // chop off last comma
sql.append(')');
Log.v(TAG, "DDL for " + table.getSimpleName() + ": " + sql);
return sql.toString();
}
Please help me, because I'm breaking my head over this.
Are you really trying to create text fields in your table that are named null?
Even if this works (and I am not sure it does), you are duplicating this and creating two identically named fields called null
The first field in that CREATE TABLE statement doesn't have a proper name (it is "null"). That's why it blows up.
There are more fields with illegal names as well.
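The root cause is that getTableDDL() appends field.get(null), i.e. the value of each static field in the contract class, so any column-name constant left uninitialized comes out as "null" in the DDL. For illustration only, here is a minimal hypothetical contract table (the Item class and its column names are made up, and whether it extends or implements GlassContract.Table depends on how the contract declares that type):
// Hypothetical contract table: every public static String field holds its column
// name, so field.get(null) in getTableDDL() returns a real name instead of null.
// Fields starting with "_" are skipped or treated specially by the builder.
public static class Item extends GlassContract.Table { // or "implements", depending on the contract
    public static final String ID = "id";
    public static final String TITLE = "title";
    public static final String CREATED_AT = "created_at";

    // Optional per-column type override, looked up as "_SQL_<field>_TYPE";
    // columns without one default to TEXT.
    public static final String _SQL_ID_TYPE = "INTEGER PRIMARY KEY";
}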
I am working on a project in which I have three tables in different databases with different schemas. That means I have three different sets of connection parameters to connect to those three tables using JDBC.
Let's suppose:
For Table1:
Username: A
Password: B
URL: C
Columns:
ID1 String
Account1 String
For Table2:
Username: P
Password: Q
URL: R
Columns:
ID2 String
Account2 String
For Table3:
Username: T
Password: U
URL: V
Columns:
ID3 String
Account3 String
And I am supposed to insert into all three tables, or any one of them, using JDBC.
Below are the three use cases I have:
If I pass only Table1 from the command prompt, then I am supposed to insert only into Table1's columns, making a connection to Table1.
If I pass Table1 and Table2 from the command prompt, then I am supposed to insert into both Table1's and Table2's columns, making connections to Table1 and Table2.
If I pass Table1, Table2 and Table3, then I am supposed to insert into all three tables using their respective connection parameters.
I am not able to figure out how to write the code for this scenario in a clean way, so that it can easily be extended in the near future if I come up with a fourth table. I can have one constants file that stores the SQL to be executed for each of the three tables, plus whatever other constants are needed.
public static void main(String[] args) {
}
class Task implements Runnable {
private Connection dbConnection = null;
private PreparedStatement preparedStatement = null;
public Task() {
}
@Override
public void run() {
dbConnection = getDBConnection();
//prepare the statement and execute it
}
}
private Connection getDBConnection() {
Connection dbConnection = null;
Class.forName(Constants.DRIVER_NAME);
dbConnection = DriverManager.getConnection( , , );
return dbConnection;
}
Can anyone provide some thoughts on how I should proceed?
Note:
The columns in each table differ a lot; one table may have 10 columns while another has 20.
Create a databases.properties file with content like this:
# Table 1
table1.url: jdbc:mysql://localhost:3306/garden
table1.user: gardener
table1.password: shavel
table1.table: fruits
table1.column.id: fruitID
table1.column.color: fruitColor
table1.column.weight: fruitWeight
# ... More fruit columns here ...
# Table 2
table2.url: jdbc:mysql://otherhost:3306/forest
table2.user: forester
table2.password: axe
table2.table: trees
table2.column.id: treeID
table2.column.height: treeHeight
# ... More tree columns here ...
# ... More tables here ...
Then do something like this:
public static void main (String [] args) throws IOException
{
    Properties databasesProperties = new Properties ();
    // Properties.load expects a stream or reader, not a file name
    try (FileInputStream in = new FileInputStream ("databases.properties"))
    {
        databasesProperties.load (in);
    }
    for (String arg: args)
    {
        String url = databasesProperties.getProperty (arg + ".url");
        String user = databasesProperties.getProperty (arg + ".user");
        String password = databasesProperties.getProperty (arg + ".password");
        String table = databasesProperties.getProperty (arg + ".table");
        String columnPrefix = arg + ".column.";
        Map <String, String> columns = new HashMap <String, String> ();
        for (String key: databasesProperties.stringPropertyNames ())
        {
            if (key.startsWith (columnPrefix))
                columns.put (
                    key.substring (columnPrefix.length ()),
                    databasesProperties.getProperty (key));
        }
        doInsert (url, user, password, table, columns);
    }
}
Later you can always add more tables into your databases.properties file.
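The doInsert helper referenced above is not shown here; the following is one possible sketch of it, assuming you simply build a parameterized INSERT from the column map (the placeholder values are made up for illustration):
static void doInsert (String url, String user, String password,
                      String table, Map <String, String> columns) throws SQLException
{
    // Build "INSERT INTO <table> (col1, col2, ...) VALUES (?, ?, ...)" from the column map
    StringBuilder names = new StringBuilder ();
    StringBuilder marks = new StringBuilder ();
    for (String column: columns.values ())
    {
        if (names.length () > 0) { names.append (", "); marks.append (", "); }
        names.append (column);
        marks.append ('?');
    }
    String sql = "INSERT INTO " + table + " (" + names + ") VALUES (" + marks + ")";
    try (Connection connection = DriverManager.getConnection (url, user, password);
         PreparedStatement statement = connection.prepareStatement (sql))
    {
        int index = 1;
        for (String column: columns.values ())
            statement.setString (index++, "value-for-" + column); // placeholder values for the example
        statement.executeUpdate ();
    }
}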
Save your database properties in a class file DBProperty.java.
final class DBProperty
{
static String[] urls = {
"C",
"R",
"V"
}; //You can add more URLs here.
static String[] driver= {
"Driver1",
"Driver2",
"Driver3"
};//You can add more drivers string
static String[] table = {
"Table1",
"Table2",
"Table3"
};//You can add more table names here According to URLs mentioned in urls array.
static String[] user = {
"A",
"P",
"T"
};//You can add more user names here according to the URLs mentioned in the urls array.
static String[] pwd = {
"B",
"Q",
"U"
};//You can add more passwords here according to the URLs mentioned in the urls array.
static String[] queries = {
"Query for Table1",
"Query for Table2",
"Query for Table3",
};//You can add more queries here for more tables according to the URLs mentioned in the urls array.
static int[] columns ={
2,
2,
2
};//You can change the column counts as needed. Index 0 belongs to Table1, index 1 to Table2, and so on.
//If you add more tables, add the corresponding column count at the next index.
static String[] columnValues ={
"1^John",
"34^Vicky",
"65^Ethen"
};//The string at each index represents a row in the corresponding table in the table[] array. Each column is separated by the delimiter "^".
}
Make all changes in the DBProperty.java file, then proceed with the following class file:
import java.sql.*;
import java.util.*;
class MultiTableInsert implements Runnable
{
Map<String,Integer> columnsInTable;
Map<String,String> tableDriver;
Map<String,String> rowForTable;
Map<String,String> queryForTable;
Map<String,String> urlForTable;
Map<String,String> userForTable;
Map<String,String> pwdForTable;
String[] tables ;
public MultiTableInsert(String... tables)//Loading all Database Settings here..
{
this.tables = tables;
columnsInTable = new LinkedHashMap<String,Integer>();
rowForTable = new LinkedHashMap<String,String>();
tableDriver = new LinkedHashMap<String,String>();
urlForTable = new LinkedHashMap<String,String>();
userForTable= new LinkedHashMap<String,String>();
pwdForTable = new LinkedHashMap<String,String>();
queryForTable = new LinkedHashMap<String,String>(); // was missing: without this, queryForTable.put() below throws a NullPointerException
for (int i = 0 ; i < DBProperty.urls.length ; i++ )
{
try
{
tableDriver.put(DBProperty.table[i],DBProperty.driver[i]);
queryForTable.put(DBProperty.table[i],DBProperty.queries[i]);
columnsInTable.put(DBProperty.table[i],DBProperty.columns[i]);
rowForTable.put(DBProperty.table[i],DBProperty.columnValues[i]);
urlForTable.put(DBProperty.table[i],DBProperty.urls[i]);
userForTable.put(DBProperty.table[i],DBProperty.user[i]);
pwdForTable.put(DBProperty.table[i],DBProperty.pwd[i]);
}
catch (Exception ex)
{
ex.printStackTrace();
}
}
}
@Override
public void run()
{
insertIntoTable(tables);
}
private void insertIntoTable(String... tables)
{
for (String tble : tables )
{
Connection con = null;
PreparedStatement pStmt = null;
try
{
Class.forName(tableDriver.get(tble));
con = DriverManager.getConnection(urlForTable.get(tble),userForTable.get(tble),pwdForTable.get(tble));
pStmt = con.prepareStatement(queryForTable.get(tble));
int columns = columnsInTable.get(tble);
String sRow = rowForTable.get(tble);
StringTokenizer tokenizer = new StringTokenizer(sRow,"^");
for (int i = 0; i < columns ; i++)
{
pStmt.setString(i+1,(String)tokenizer.nextElement());
}
pStmt.execute();
}
catch (Exception ex)
{
ex.printStackTrace();
}
finally
{
try
{
con.close();
}catch (Exception ex){}
try
{
pStmt.close();
}catch (Exception ex){}
}
}
}
public static void main(String[] args)
{
int length = args.length;
int THREAD_COUNTS = 10;//Number of threads you want to start.
switch (length)
{
case 0:
System.out.println("Usage: javac MultiTableInsert Table1/Table2/Table3 <Table1/Table2/Table3> <Table1/Table2/Table3>");
System.exit(0);
case 1:
for (int i = 0 ; i < THREAD_COUNTS ; i++)
{
MultiTableInsert mti = new MultiTableInsert(args[0]);
Thread th = new Thread(mti,"Thread"+i);//Create New Thread
th.start(); //Start Thread
}
break;
case 2:
for (int i = 0 ; i < THREAD_COUNTS ; i++)
{
MultiTableInsert mti = new MultiTableInsert(args[0],args[1]);
Thread th = new Thread(mti,"Thread"+i);//Create New Thread
th.start(); //Start Thread
}
break;
default:
for (int i = 0 ; i < THREAD_COUNTS ; i++)
{
MultiTableInsert mti = new MultiTableInsert(args[0],args[1],args[2]);
Thread th = new Thread(mti,"Thread"+i);//Create New Thread
th.start(); //Start Thread
}
break;
}
}
}
I've installed HBase 0.94.0. I need to improve my read performance through Scan. I've inserted 100000 random records.
When I set setCaching(100), my performance was 16 secs for 100000 records.
When I set it to setCaching(50), my performance was 90 secs for 100000 records.
When I set it to setCaching(10), my performance was 16 secs for 100000 records.
public class Test {
public static void main(String[] args) {
long start, middle, end;
HTableDescriptor descriptor = new HTableDescriptor("Student7");
descriptor.addFamily(new HColumnDescriptor("No"));
descriptor.addFamily(new HColumnDescriptor("Subject"));
try {
HBaseConfiguration config = new HBaseConfiguration();
HBaseAdmin admin = new HBaseAdmin(config);
admin.createTable(descriptor);
HTable table = new HTable(config, "Student7");
System.out.println("Table created !");
start = System.currentTimeMillis();
for(int i =1;i<100000;i++) {
String s=Integer.toString(i);
Put p = new Put(Bytes.toBytes(s));
p.add(Bytes.toBytes("No"), Bytes.toBytes("IDCARD"),Bytes.toBytes("i+10"));
p.add(Bytes.toBytes("No"), Bytes.toBytes("PHONE"),Bytes.toBytes("i+20"));
p.add(Bytes.toBytes("No"), Bytes.toBytes("PAN"),Bytes.toBytes("i+30"));
p.add(Bytes.toBytes("No"), Bytes.toBytes("ACCT"),Bytes.toBytes("i+40"));
p.add(Bytes.toBytes("Subject"), Bytes.toBytes("English"),Bytes.toBytes("50"));
p.add(Bytes.toBytes("Subject"), Bytes.toBytes("Science"),Bytes.toBytes("60"));
p.add(Bytes.toBytes("Subject"), Bytes.toBytes("History"),Bytes.toBytes("70"));
table.put(p);
}
middle = System.currentTimeMillis();
Scan s = new Scan();
s.setCaching(100);
ResultScanner scanner = table.getScanner(s);
try {
for (Result rr = scanner.next(); rr != null; rr=scanner.next()) {
System.out.println("Found row: " + rr);
}
end = System.currentTimeMillis();
} finally {
scanner.close();
}
System.out.println("TableCreation-Time: " + (middle - start));
System.out.println("Scan-Time: " + (middle - end));
} catch (IOException e) {
System.out.println("IOError: cannot create Table.");
e.printStackTrace();
}
}
}
Why is this happening?
Why would you want to return every record in your 100000-record table? You're doing a full table scan, and just as in any large database this is slow.
Try thinking about a more useful use case in which you would like to return some columns of a record, or a range of records.
HBase has only one index on its table: the row key. Make use of that. Try defining your row key so that you can get the data you need just by specifying the row key.
Let's say you would like to know the value of Subject:History for the rows with a row key between 80000 and 80100. (Note that setCaching(100) means HBase will fetch 100 records per RPC, so in this case just one RPC. Fetching 100 rows obviously requires more memory than fetching, say, one row. Keep that in mind in a large multi-user environment.)
Long start, end;
start = System.currentTimeMillis();
Scan s = new Scan(String.valueOf(80000).getBytes(), String.valueOf(80100).getBytes());
s.setCaching(100);
s.addColumn("Subject".getBytes(), "History".getBytes());
ResultScanner scanner = table.getScanner(s);
try {
for (Result rr = scanner.next(); rr != null; rr=scanner.next()) {
System.out.println("Found row: " + new String(rr.getRow(), "UTF-8") + " value: " + new String(rr.getValue("Subject".getBytes(), "History".getBytes()), "UTF-8")));
}
end = System.currentTimeMillis();
} finally {
scanner.close();
}
System.out.println("Scan: " + (end - start));
This might look stupid because how would you know which rows you need just by an integer? Well, exactly, but that's why you need to design a row key according to what you're about to query instead of just using an incremental value as you would in a traditional database.
Try this example. It should be fast.
Note: I didn't run the example. I just typed it here. Maybe there are some small syntax errors you should correct but I hope the idea is clear.