The following works, but how can I collect multiple MapSqlParameterSource objects and insert them all in one batch?
SimpleJdbcInsert insert = new SimpleJdbcInsert(ds).withTableName(TABLENAME);
MapSqlParameterSource entry = new MapSqlParameterSource()
.addValue("id", report.queryId, Types.INTEGER)
.addValue("firstname", report.reportDate, Types.DATE)
.addValue("age", report.completionRatio, Types.INTEGER);
insert.execute(entry);
Luckily, SimpleJdbcInsert can take an array (not a list) of MapSqlParameterSource, so it's possible as follows:
List<MapSqlParameterSource> entries = new ArrayList<>();
entries.add(entry);
MapSqlParameterSource[] array = entries.toArray(new MapSqlParameterSource[entries.size()]);
insert.executeBatch(array);
There is a better way of doing it with SqlParameterSourceUtils:
// Each map in this list represents one row: column name -> value.
private final List<Map<String, Object>> records = new LinkedList<>();
final SimpleJdbcInsert statement = new SimpleJdbcInsert(dataSource)
        .withTableName("stats")
        .usingGeneratedKeyColumns("id")
        .usingColumns("document", "error", "run", "celex");
// createBatch turns the list of maps into an array of SqlParameterSource.
statement.executeBatch(SqlParameterSourceUtils.createBatch(records));
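For completeness, a minimal sketch of how the records list might be populated before the batch runs (the column names are the ones used above; the values are made up):
Map<String, Object> row = new HashMap<>();
row.put("document", "doc-1");   // sample value, not from the original post
row.put("error", "none");
row.put("run", 1);
row.put("celex", "32013R0001");
records.add(row);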
I want to send data from lists as input to execute a stored procedure. In the code below, each list holds values that are to be sent as one input parameter.
public void onClick$btnSend() throws Exception {
Workbook workbook = new Workbook("D:/excel file/Mapping Prod Matriks _Group Sales Commercial.xlsx");
com.aspose.cells.Worksheet worksheet = workbook.getWorksheets().get(0);
com.aspose.cells.Cells cells = worksheet.getCells();
Range displayRange = cells.getMaxDisplayRange();
List<String> ParaObjGroup = new ArrayList<String>();
List<String> ParaObjCode = new ArrayList<String>();
List<String> ParaProdMatrixId = new ArrayList<String>();
List<String> ParaProdChannelId = new ArrayList<String>();
List<String> ParaProdSalesGroupId = new ArrayList<String>();
List<String> ParaCustGroup = new ArrayList<String>();
List<String> ParaSlsThroughId = new ArrayList<String>();
List<Integer> Active = new ArrayList<Integer>();
for (int row = displayRange.getFirstRow() + 1; row < displayRange.getRowCount(); row++) {
ParaObjGroup.add(displayRange.get(row,1).getStringValue());
ParaObjCode.add(displayRange.get(row,3).getStringValue());
ParaProdMatrixId.add(displayRange.get(row,5).getStringValue());
ParaProdChannelId.add(displayRange.get(row,7).getStringValue());
ParaProdSalesGroupId.add(displayRange.get(row,9).getStringValue());
ParaCustGroup.add(displayRange.get(row,11).getStringValue());
ParaSlsThroughId.add(displayRange.get(row,13).getStringValue());
Active.add(displayRange.get(row,14).getIntValue());
}
System.out.println(ParaObjGroup);
System.out.println(ParaObjCode);
System.out.println(ParaProdMatrixId);
System.out.println(ParaProdChannelId);
System.out.println(ParaProdSalesGroupId);
System.out.println(ParaCustGroup);
System.out.println(ParaSlsThroughId);
System.out.println(Active);
lovService.coba(ParaObjGroup, ParaObjCode, ParaProdMatrixId, ParaProdChannelId, ParaProdSalesGroupId, ParaCustGroup, ParaSlsThroughId, Active);
}
and below is the code that executes the stored procedure:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = {SQLException.class, Exception.class})
public void executeSPForInsertData(DataSource ds, String procedureName, Map<String[], Object[]> inputParameter) {
SimpleJdbcCall jdbcCall = new SimpleJdbcCall(paramsDataSourceBean).withProcedureName(procedureName);
jdbcCall.execute(inputParameter);
}
But I have a problem: I cannot put the lists into the parameter map:
@ServiceLog(schema = ConstantaVariable.DBDefinition_Var.PARAMS_DB_SCHEMA, sp = ConstantaVariable.PARAMSProcedure_VAR.PR_SP_FAHMI)
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = {Exception.class, SQLException.class})
public void coba(List<String> params1, List<String> params2, List<String> params3, List<String> params4,
        List<String> params5, List<String> params6, List<String> params7, List<Integer> params8) {
Map<String[], Object[]> mapInputParameter = new LinkedHashMap<String[], Object[]>();
mapInputParameter.put("P_OBJT_GROUP", params1);
mapInputParameter.put("P_CODE", params2);
mapInputParameter.put("P_PROD_MATRIX_ID", params3);
mapInputParameter.put("P_PROD_CHANNEL_ID", params4);
mapInputParameter.put("P_PROD_SALES_GROUP_ID", params5);
mapInputParameter.put("P_CUST_GROUP", params6);
mapInputParameter.put("P_SLS_THROUGH_ID", params7);
mapInputParameter.put("P_ACTIVE", params8);
ParamsService.getService().executeSPForInsertData(null, ConstantaVariable.PARAMSProcedure_VAR.PR_SP_FAHMI, mapInputParameter);
}
The type Map<String[], Object[]> is not compatible with what you are trying to put into it: the key is a String and the value is a List<String>.
There are two solutions:
Change the map to be compatible with the inserted parameters:
Map<String, List<String>> mapInputParameter = new LinkedHashMap<>();
If you need to keep the original map type, then you have to change the way you put the parameters into the map:
Map<String[], Object[]> mapInputParameter = new LinkedHashMap<>();
mapInputParameter.put(new String[] { "P_OBJT_GROUP" }, new Object[] { params1 });
mapInputParameter.put(new String[] { "P_CODE" }, new Object[] { params2 });
The drawback is that in further processing you have to check if the array is not empty and cast explicitly from Object to List<String>.
If you want something "more universal and more generic", I'd go for Map<String, List<Object>>. In any case, I see no reason to use an array in the map unless it is explicitly required (I have no information about the executeSPForInsertData method).
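A minimal sketch of the first solution wired up end to end, assuming the stored procedure declares named parameters matching these keys (whether a List binds correctly as a parameter value depends on the driver and the procedure's declared types):
Map<String, Object> in = new LinkedHashMap<>();
in.put("P_OBJT_GROUP", params1);   // key = procedure parameter name, value = the list
in.put("P_CODE", params2);
// ... remaining parameters ...
SimpleJdbcCall jdbcCall = new SimpleJdbcCall(dataSource).withProcedureName(procedureName);
// SimpleJdbcCall.execute accepts a Map<String, ?>, so no array-keyed map is needed.
jdbcCall.execute(in);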
I have a DynamoDB table structured as follows:
{
"id": "1",
"skills": {
"skill1": "html",
"skill2": "css"
}
}
I have a task to filter by skills value. In order to complete the task I wrote Java logic as follows:
AmazonDynamoDB client = dynamoDBService.getClient();
DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("dummy");
Map<String, String> attributeNames = new HashMap<String, String>();
attributeNames.put("#columnValue", "skills.skill1");
Map<String, AttributeValue> attributeValues = new HashMap<String, AttributeValue>();
attributeValues.put(":val1", new AttributeValue().withS("html"));
ScanSpec scanSpec = new ScanSpec().withProjectionExpression("skills.skill1")
.withFilterExpression("#columnValue = :val1 ").withNameMap(new NameMap().with("#columnValue", "skills.skill1"))
.withValueMap(new ValueMap().withString(":val1", "html"));
ItemCollection<ScanOutcome> items = table.scan(scanSpec);
Iterator<Item> iter = items.iterator();
while (iter.hasNext()) {
Item item = iter.next();
System.out.println("--------"+item.toString());
}
The mentioned code does not help me out. Any solution?
You can use a ProjectionExpression to retrieve only specific attributes or elements, rather than an entire item. A ProjectionExpression can specify top-level or nested attributes, using document paths.
For example, from AWS:
GetItemSpec spec = new GetItemSpec()
.withPrimaryKey("Id", 206)
.withProjectionExpression("Id, Title, RelatedItems[0], Reviews.FiveStar")
.withConsistentRead(true);
Item item = table.getItem(spec);
System.out.println(item.toJSONPretty());
A simple solution to this problem is as follows (a rough sketch is shown after the list):
First fetch all the records from the table.
Then iterate over the list of returned objects.
Extract the skills from each object.
Write your logic to do the filtering.
Repeat the loop till the last record.
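A rough sketch of that approach with the DynamoDB document API (the table name "dummy" and the nested "skills" map are taken from the question; note that a full scan reads every item, so this is only reasonable for small tables):
AmazonDynamoDB client = dynamoDBService.getClient();
DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("dummy");
// Fetch everything, then filter client-side on the nested "skills" map.
for (Item item : table.scan()) {
    Map<String, Object> skills = item.getMap("skills");
    if (skills != null && skills.containsValue("html")) {
        System.out.println(item.toJSONPretty());
    }
}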
I found a solution; the scanSpec should be as follows:
// #category resolves to "skills" and #uid to the nested key held in queryString,
// so the filter expression becomes e.g. skills.skill1 = :categoryuid.
ScanSpec scanSpec = new ScanSpec()
        .withFilterExpression("#category.#uid = :categoryuid")
        .withNameMap(new NameMap().with("#category", "skills").with("#uid", queryString))
        .withValueMap(new ValueMap().withString(":categoryuid", queryString));
I'm wondering if we can perform batch write/update with the low-level API for DynamoDB in Java.
Thanks in advance!
Yes. Something like this:
Map<String, List<WriteRequest>> writeRequestItems = new HashMap<String, List<WriteRequest>>();
Map<String, AttributeValue> userItem1 = new HashMap<String, AttributeValue>();
userItem1.put("userId", new AttributeValue().withS("1"));
userItem1.put("name", new AttributeValue().withS("Alex"));
Map<String, AttributeValue> userItem2 = new HashMap<String,AttributeValue>();
userItem2.put("userId", new AttributeValue().withS("2"));
userItem2.put("name", new AttributeValue().withS("Jonh"));
List<WriteRequest> userList = new ArrayList<WriteRequest>();
userList.add(new WriteRequest().withPutRequest(new PutRequest().withItem(userItem1)));
userList.add(new WriteRequest().withPutRequest(new PutRequest().withItem(userItem2)));
writeRequestItems.put("User", userList);
BatchWriteItemRequest batchWriteItemRequest = new BatchWriteItemRequest(writeRequestItems);
BatchWriteItemResult batchWriteItemResult = dynamoDBClient.batchWriteItem(batchWriteItemRequest);
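One caveat: BatchWriteItem can succeed while leaving some items unprocessed (for example under throttling), so check the result and retry, ideally with exponential backoff:
Map<String, List<WriteRequest>> unprocessed = batchWriteItemResult.getUnprocessedItems();
while (!unprocessed.isEmpty()) {
    // Consider sleeping with exponential backoff between retries here.
    batchWriteItemResult = dynamoDBClient.batchWriteItem(new BatchWriteItemRequest(unprocessed));
    unprocessed = batchWriteItemResult.getUnprocessedItems();
}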
Yes. You can use the AmazonDynamoDB class to perform these operations.
Check http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/AmazonDynamoDB.html#batchWriteItem-com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest-
I'm trying to use Spark (Java API) to take an in-memory Map (that potentially contains other nested Maps as its values) and convert it into a dataframe. I think I need something along these lines:
Map myMap = getSomehow();
RDD myRDD = sparkContext.makeRDD(myMap); // ???
DataFrame df = sparkContext.read(myRDD); // ???
But I'm having a tough time seeing the forest for the trees here... any ideas? Again, this might be a Map<String, String> or a Map<String, Map>, where there could be several nested layers of maps inside of maps, etc.
So I tried something. I'm not sure if this is the most efficient way to do it, but I do not see any other right now.
SparkConf sf = new SparkConf().setAppName("name").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(sf);
SQLContext sqlCon = new SQLContext(sc);
HashMap<String, String> putMap = new HashMap<String, String>();
putMap.put("1", "test");
Map<String, HashMap<String, String>> map = new HashMap<String, HashMap<String, String>>();
map.put("test1", putMap);
List<Tuple2<String, HashMap>> list = new ArrayList<Tuple2<String, HashMap>>();
Set<String> allKeys = map.keySet();
for (String key : allKeys) {
list.add(new Tuple2<String, HashMap>(key, (HashMap) map.get(key)));
}
JavaRDD<Tuple2<String, HashMap>> rdd = sc.parallelize(list);
System.out.println(rdd.first());
List<StructField> fields = new ArrayList<>();
StructField field1 = DataTypes.createStructField("String", DataTypes.StringType, true);
StructField field2 = DataTypes.createStructField("Map",
DataTypes.createMapType(DataTypes.StringType, DataTypes.StringType), true);
fields.add(field1);
fields.add(field2);
StructType struct = DataTypes.createStructType(fields);
JavaRDD<Row> rowRDD = rdd.map(new Function<Tuple2<String, HashMap>, Row>() {
@Override
public Row call(Tuple2<String, HashMap> arg0) throws Exception {
return RowFactory.create(arg0._1, arg0._2);
}
});
DataFrame df = sqlCon.createDataFrame(rowRDD, struct);
df.show();
In this scenario I assumed that the Map in the Dataframe is of Type (String, String). Hope this helps!
Edit: Obviously you can delete all the prints; I added them for visualization purposes!
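As a side note on the nested case from the question: for a Map<String, Map<String, String>>, the value type of the MapType can itself be a MapType. A sketch of the schema only, not tested against the rest of the code above:
// Schema for a column holding Map<String, Map<String, String>>.
StructField nestedField = DataTypes.createStructField("Map",
        DataTypes.createMapType(DataTypes.StringType,
                DataTypes.createMapType(DataTypes.StringType, DataTypes.StringType)),
        true);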
New to Spring, I am trying to insert a List<Map<String, Object>> into a table. Until now I have been using SqlParameterSource for batch updates, which works fine when a Java bean is supplied. Something like this:
@Autowired
private NamedParameterJdbcTemplate v2_template;
public int[] bulkInsertIntoSiteTable(List<SiteBean> list){
SqlParameterSource[] batch = SqlParameterSourceUtils
.createBatch(list.toArray());
int[] updateCounts = v2_template
.batchUpdate(
"insert into sitestatus (website, status, createdby) values (:website, :status, :username)",
batch);
return updateCounts;
}
However, when I tried the same technique with a list of maps in place of a bean, it failed (rightly so).
public int[] bulkInsertIntoSiteTable(List<Map<String, Object>> list){
SqlParameterSource[] batch = SqlParameterSourceUtils
.createBatch(list.toArray());
int[] updateCounts = v2_template
.batchUpdate(
"insert into sitestatus (website, status, createdby) values (:website, :status, :username)",
batch);
return updateCounts;
}
The above code failed with the following exception:
Exception in thread "main" org.springframework.dao.InvalidDataAccessApiUsageException: No value supplied for the SQL parameter 'website': Invalid property 'website' of bean class [org.springframework.util.LinkedCaseInsensitiveMap]: Bean property 'website' is not readable or has an invalid getter method: Does the return type of the getter match the parameter type of the setter?
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:322)
at org.springframework.jdbc.core.namedparam.NamedParameterBatchUpdateUtils$1.setValues(NamedParameterBatchUpdateUtils.java:45)
at org.springframework.jdbc.core.JdbcTemplate$4.doInPreparedStatement(JdbcTemplate.java:893)
at org.springframework.jdbc.core.JdbcTemplate$4.doInPreparedStatement(JdbcTemplate.java:1)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:587)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:615)
at org.springframework.jdbc.core.JdbcTemplate.batchUpdate(JdbcTemplate.java:884)
at org.springframework.jdbc.core.namedparam.NamedParameterBatchUpdateUtils.executeBatchUpdateWithNamedParameters(NamedParameterBatchUpdateUtils.java:40)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.batchUpdate(NamedParameterJdbcTemplate.java:303)
at tester.utitlies.dao.VersionTwoDao.bulkInsertIntoSites(VersionTwoDao.java:21)
at tester.utitlies.runner.Main.main(Main.java:28)
It fails because it treats the list entries as beans, I guess. I cannot find a way to perform a batch update with a list of maps using NamedParameterJdbcTemplate. Please advise.
As per the Spring NamedParameterJdbcTemplate docs, this method can be used for batch updating with maps:
int[] batchUpdate(String sql, Map<String,?>[] batchValues)
The real challenge was to get an array of Map<String, Object> from the corresponding List<Map<String, Object>>. I used the following code to get the array and perform the batch update.
public static Map<String, Object>[] getArrayData(List<Map<String, Object>> list) {
    @SuppressWarnings("unchecked")
    Map<String, Object>[] maps = new HashMap[list.size()];
    int i = 0;
    for (Map<String, Object> map : list) {
        maps[i++] = map;
    }
    return maps;
}
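With that helper in place, the failing method from the question might be rewritten along these lines (a sketch; the SQL and parameter names are the ones from the question):
public int[] bulkInsertIntoSiteTable(List<Map<String, Object>> list) {
    return v2_template.batchUpdate(
            "insert into sitestatus (website, status, createdby) values (:website, :status, :username)",
            getArrayData(list));
}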
You cannot directly pass your beans to NamedParameterJdbcTemplate's batchUpdate; it accepts the parameters only as an array, either an array of SqlParameterSource or an array of Map.
Here I will demonstrate how you can use an array of Map to achieve your goal.
Considering the above problem, convert your List of beans into an array of maps.
Each map corresponds to one row to be inserted (one bean object); each field and its value are stored as a key-value pair in the map, where the key is the field name and the value is the field's value.
@Autowired
private NamedParameterJdbcTemplate v2_template;

public int[] bulkInsertIntoSiteTable(List<SiteBean> list) {
    String yourQuery = "insert into sitestatus (website, status, createdby) "
            + "values (:website, :status, :username)";
    Map<String, Object>[] batchOfInputs = new HashMap[list.size()];
    int count = 0;
    for (SiteBean sb : list) {
        Map<String, Object> map = new HashMap<>();
        map.put("website", sb.getWebsite());
        map.put("status", sb.getStatus());
        map.put("username", sb.getUsername());
        batchOfInputs[count++] = map;
    }
    int[] updateCounts = v2_template.batchUpdate(yourQuery, batchOfInputs);
    return updateCounts;
}
There is another way to avoid @SuppressWarnings("unchecked").
public static final String INSERT_INTO = "INSERT INTO {0} ({1}) VALUES ({2})";
private NamedParameterJdbcTemplate template;
template.batchUpdate(insertQuery(mapRows.get(0)), batchArgs(mapRows));
/**
 * Creates an SQL INSERT statement from a Map record.
 *
 * @return literal INSERT INTO [schema].[prefix][table_name] (column1, column2, column3, ...)
 *         VALUES (value1, value2, value3, ...);
 */
public String insertQuery(Map<String, String> rowMap) {
String schemaTable = Objects.isNull(getSchema()) ? table : getSchema() + "." + table;
String splittedColumns = String.join(",", rowMap.keySet());
String splittedValues = rowMap.keySet().stream()
.map(s -> ":" + s).collect(Collectors.joining(","));
return MessageFormat.format(INSERT_INTO, schemaTable, splittedColumns, splittedValues);
}
private MapSqlParameterSource[] batchArgs(List<Map<String, String>> mapRows) {
int size = mapRows.size();
MapSqlParameterSource[] batchArgs = new MapSqlParameterSource[size];
IntStream.range(0, size).forEach(i -> {
MapSqlParameterSource args = new MapSqlParameterSource(mapRows.get(i));
batchArgs[i] = args;
});
return batchArgs;
}
Best regards
One working snippet:
public List<Integer> bulkInsert(String insert, List<Map<String, Object>> details) {
    @SuppressWarnings("unchecked")
    Map<String, Object>[] maps = new HashMap[details.size()];
    Map<String, Object>[] batchValues = details.toArray(maps);
    int[] response = namedParameterJdbcTemplate.batchUpdate(insert, batchValues);
    // Arrays.asList(int[]) would yield a single-element List<int[]>, so box explicitly.
    return Arrays.stream(response).boxed().collect(Collectors.toList());
}
I tested with this code:
Map<String, Object>[] rs = new HashMap[1]; // generic array creation (new Map<String, Object>[1]) does not compile
Map<String, Object> item1 = new HashMap<>();
item1.put("name", "Tien Nguyen");
item1.put("age", 35);
rs[0] = item1;
NamedParameterJdbcTemplate jdbc = new NamedParameterJdbcTemplate(datasource);
// datasource from JDBC.
jdbc.batchUpdate("call sp(:name, :age)", rs);
Hope it is easy to follow.
Thanks