I have a simple table with several required and nullable columns. My Java application writes data into it via JsonStreamWriter. Most of the time everything is OK, but sometimes it fails with this error:
java.util.concurrent.ExecutionException:
com.google.api.gax.rpc.PermissionDeniedException:
io.grpc.StatusRuntimeException: PERMISSION_DENIED: Permission
'TABLES_UPDATE_DATA' denied on resource
'projects/project-name/datasets/dataset-name/tables/table-name' (or it
may not exist).
The data is similar each time; I am only appending rows, never updating them, and I have no idea what goes wrong.
private Queue<Map<String, Object>> queue = new ConcurrentLinkedQueue<>();
private JsonStreamWriter streamWriter;
@Autowired
private BigQueryManager manager;
@PostConstruct
private void initialize() {
WriteStream stream = WriteStream.newBuilder().setType(WriteStream.Type.COMMITTED).build();
TableName parentTable = TableName.of(project, dataset, table);
CreateWriteStreamRequest writeStreamRequest = CreateWriteStreamRequest.newBuilder().setParent(parentTable.toString()).setWriteStream(stream).build();
WriteStream writeStream = manager.getClient().createWriteStream(writeStreamRequest);
try {
streamWriter = JsonStreamWriter.newBuilder(writeStream.getName(), writeStream.getTableSchema(), manager.getClient()).build();
} catch (Exception ex) {
log.error("Unable to initialize stream writer.", ex);
}
}
@Override
public void flush() {
try {
List<Pair<JSONArray, Future>> tasks = new ArrayList<>();
while (!queue.isEmpty()) {
JSONArray batch = new JSONArray();
JSONObject record = new JSONObject();
queue.poll().forEach(record::put);
batch.put(record);
tasks.add(new Pair<>(batch, streamWriter.append(batch)));
}
List<AppendRowsResponse> responses = new ArrayList<>();
tasks.forEach(task -> {
try {
responses.add((AppendRowsResponse) task.getValue().get());
} catch (Exception ex) {
log.debug("Exception while task {} running: {}", task.getKey(), ex.getMessage(), ex);
}
});
responses.forEach(response -> {
if (!"".equals(response.getError().getMessage())) {
log.error(response.getError().getMessage());
}
});
} finally {
streamWriter.close();
}
}
@Override
public void addRow(Map<String, Object> row) {
queue.add(row);
}
This issue was fixed in v1.20.0. If you're using a lower version, consider updating the library. If you're using a higher version, you could try constructing the JsonStreamWriter builder without passing the client explicitly, letting the StreamWriter initialize the BigQuery client by default:
streamWriter = JsonStreamWriter.newBuilder(writeStream.getName(), writeStream.getTableSchema()).build();
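For reference, here is a minimal sketch of the poster's initialize() method with that change applied (assuming the surrounding fields and BigQueryManager stay the same; the write stream itself is still created through the managed client):
@PostConstruct
private void initialize() {
    WriteStream stream = WriteStream.newBuilder().setType(WriteStream.Type.COMMITTED).build();
    TableName parentTable = TableName.of(project, dataset, table);
    CreateWriteStreamRequest writeStreamRequest = CreateWriteStreamRequest.newBuilder()
            .setParent(parentTable.toString())
            .setWriteStream(stream)
            .build();
    WriteStream writeStream = manager.getClient().createWriteStream(writeStreamRequest);
    try {
        // No client argument: the JsonStreamWriter creates and manages its own.
        streamWriter = JsonStreamWriter.newBuilder(writeStream.getName(), writeStream.getTableSchema()).build();
    } catch (Exception ex) {
        log.error("Unable to initialize stream writer.", ex);
    }
}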
Related
I'm working on an AWS Lambda function that reads streams from Kinesis and inserts the data into a database.
The first scenario: when I'm sending only one record to Kinesis, everything goes as expected and I can see the record in my database.
The second scenario: when I send records in batch mode (1000 records), after the execution I check the Kinesis incoming-data metric and I can see that all 1000 records arrived. The next step is checking the Lambda function logs, where I can see that only a few records (2 or 3) were processed.
That is confirmed when I check the database: only 2 or 3 records got inserted.
I can't understand why my Lambda is not handling all the records. The logs also are not telling me much, given that I'm handling all the exceptions, so no error shows up in the Lambda error-count metric.
I can't really tell whether Kinesis is not invoking the Lambda for all the records, whether there is some kind of configuration I'm missing, or whether it is something else.
Here is the method that sends records to Kinesis from the batch:
@Override
public void sendChannelRecords(Channel channel) {
String kinesisStreamName = dataLakeConfig.getKinesisStreamName();
try {
byte[] bytes = (conversionToJsonUtils.convertObjectToJsonString(channel) + "\n").getBytes("UTF-8");
PutRecordRequest putRecordRequest = new PutRecordRequest()
.withPartitionKey(String.valueOf(random.nextInt()))
.withStreamName(kinesisStreamName)
.withData(ByteBuffer.wrap(bytes));
callKinesisAsync(putRecordRequest);
} catch (Exception e) {
log.error("an error has been occurred during sending Channel to Kinesis : " + e.getMessage());
}
}
The handleRequest method of the Lambda function:
public class RocEventProcessorToTeradata implements RequestHandler<KinesisEvent, Object> {
/**
* This function write different Collections in Teradata warehouse.
* For cost reasons, we chose to use only a single Kinesis stream to receive all collections from ROC-API.
*/
private static Logger log = (Logger) LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
private static String LOG_LEVEL = System.getenv("LOGGING_LEVEL");
/**
* Note: the Teradata connection must be opened and closed inside the handleRequest method only; otherwise the connection remains open!
*/
@SneakyThrows
public Object handleRequest(KinesisEvent event, Context context) {
log.setLevel(Level.valueOf(LOG_LEVEL));
TeradataParams teradataParams = buildTeradataParams();
List<UserRecord> userRecordsList = RecordDeaggregator.deaggregate(event.getRecords());
Connection conn = null;
Channel channel = new ConversionToJava<Channel>().getAsGivenEntity(new String(userRecordsList.get(0).getData().array()), Channel.class);
boolean isInstanceOfChannel = channel.getEpgid() != null;
try {
Class.forName("com.teradata.jdbc.TeraDriver");
conn = DriverManager.getConnection(teradataParams.getTeradataUrl(), teradataParams.getTeradataUser(), teradataParams.getTeradataPwd());
if (isInstanceOfChannel) {
InputFactory.getInputObject(InputType.CHANNEL).apply(conn, userRecordsList, teradataParams);
}
} catch (SQLException | ClassNotFoundException e) {
log.error(" an error has been occurred during connection opening Teradata : {}", e.getMessage());
throw e;
} finally {
//close resources associated with the connection, as Teradata doesn't allow more than 16 open resources in the same connection
try {
if (conn != null) {
conn.close();
}
} catch (Exception e) {
log.error("an error has been occurred when closing Teradata connection : {}", e.getMessage());
}
}
return "";
}
private TeradataParams buildTeradataParams() {
String teradataUser = getTeradataStoreParam().get(TeradataConnectionParams.USERNAME_PARAM);
String teradataPwd = getTeradataStoreParam().get(TeradataConnectionParams.PASSWORD_PARAM);
String teradataUrl = getTeradataStoreParam().get(TeradataConnectionParams.DATABASE_URL);
String connurl = "jdbc:teradata://" + teradataUrl + "/database=My_DB,tmode=ANSI,charset=UTF8";
return TeradataParams.builder().teradataUser(teradataUser).teradataPwd(teradataPwd).teradataUrl(connurl).build();
}
private Map<String, String> getTeradataStoreParam() {
return new ParameterStoreImpl().
getParameterStore(
Lists.newArrayList(TeradataConnectionParams.USERNAME_PARAM,
TeradataConnectionParams.PASSWORD_PARAM,
TeradataConnectionParams.DATABASE_URL)
);
}
}
@Override
public void apply(Connection conn, List<UserRecord> userRecordsList, TeradataParams teradataParams) {
log.setLevel(Level.valueOf(LOG_LEVEL));
try {
Channel channel = new ConversionToJava<Channel>().getAsGivenEntity(new String(userRecordsList.get(0).getData().array()), Channel.class);
ChannelRepository channelRepository = new ChannelRepositoryImpl();
insertChannelStatement = conn.prepareStatement(ChannelQuery.INSERT_CHANNEL);
deleteChannelStatement = conn.prepareStatement(ChannelQuery.DELETE_CHANNEL);
rejectChannelStatement = conn.prepareStatement(ChannelQuery.INSERT_REJET);
try {
if (channel.isDeleted()) {
channelRepository.deleteChannel(channel, deleteChannelStatement);
} else {
channelRepository.executeChannel(channel, insertChannelStatement);
}
conn.commit();
} catch (ChannelException e) {
log.error("an error has been occured during Channel insertion in teradata {}", e.getMessage());
handleChannelReject(conn, rejectChannelStatement, channel, e);
}
} catch (SQLException e) {
log.error(" an error has been occurred during connection opening Teradata Channel: {}", e.getMessage());
} finally {
try {
closeStatements(insertChannelStatement, deleteChannelStatement, rejectChannelStatement);
} catch (Exception e) {
log.error("an error has been occurred when closing Teradata channel connection : {}", e.getMessage());
}
}
}
}
public class ChannelRepositoryImpl implements ChannelRepository {
private static Logger log = (Logger) LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
private static String LOG_LEVEL = System.getenv("LOGGING_LEVEL");
@Override
public void executeChannel(Channel channel, PreparedStatement statement) {
log.setLevel(Level.valueOf(LOG_LEVEL));
final FeedChannelPreparedStatement feedStatement = new FeedChannelPreparedStatement();
try {
feedStatement.feedChannelPreparedSatementInsert(statement, channel);
statement.executeBatch();
} catch (Exception e) {
System.out.println("Insert Channel Exception : "+ e.getMessage());
log.error("an error has been occurred during Channel insertion in Teradata : {}", e.getMessage());
throw new ChannelException(e.getMessage());
}
}
@Override
public void deleteChannel(Channel channel, PreparedStatement statement) {
log.setLevel(Level.valueOf(LOG_LEVEL));
final FeedChannelPreparedStatement feedStatement = new FeedChannelPreparedStatement();
try {
System.out.println("Delete Channel : "+ channel.getEpgid());
feedStatement.feedChannelPreparedSatementDelete(statement, channel);
statement.executeBatch();
} catch (Exception e) {
System.out.println("Delete Channel Exception: "+ channel.getEpgid());
log.error("an error has been occurred during Channel deleting in Teradata : {}", e.getMessage());
throw new ChannelException(e.getMessage());
}
}
}
public class FeedChannelPreparedStatement extends ChannelPreparedStatement {
void feedChannelPreparedSatementInsert(PreparedStatement statement, Channel channel) throws SQLException {
List<ProductCodeTable> products = ofNullable(channel.getProducts()).orElse(emptyList());
if (!products.isEmpty()) {
for (ProductCodeTable product : products) {
if (product.getId() != null) {
addStatementBatch(statement, channel, product.getId());
}
}
} else {
addStatementBatch(statement, channel, null);
}
}
void feedChannelPreparedSatementDelete(PreparedStatement statement, Channel channel) throws SQLException {
addStatementBatchForDelete(statement, channel.getEpgid());
}
}
public class ChannelPreparedStatement {
private static final int EPGID_ID_INDEX = 1;
private static final int SERVICE_NAME_INDEX = 2;
private static final int PRODUCT_ID_INDEX = 3;
private static final int ID_PROVENANCE_INDEX = 4;
private static final int TMS_CHARGEMENT_INDEX = 5;
void addStatementBatch(PreparedStatement ps, Channel channel, String productId) throws SQLException {
ps.setObject(EPGID_ID_INDEX, Integer.toString(channel.getEpgid()), Types.VARCHAR);
ps.setObject(SERVICE_NAME_INDEX, channel.getServiceName(), Types.VARCHAR);
ps.setObject(PRODUCT_ID_INDEX, productId != null ? productId : "", Types.VARCHAR);
ps.setInt(ID_PROVENANCE_INDEX, 68);
ps.setTimestamp(TMS_CHARGEMENT_INDEX, FormatUtil.getTmsChargement());
ps.addBatch();
}
void addStatementBatchForDelete(PreparedStatement ps, Integer epjId) throws SQLException {
ps.setObject(EPGID_ID_INDEX, Integer.toString(epjId), Types.VARCHAR);
ps.addBatch();
}
}
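One detail worth noting in the handler above: RecordDeaggregator.deaggregate(event.getRecords()) returns a list of user records, but only userRecordsList.get(0) is ever converted and applied (both in handleRequest and in apply), so each invocation can insert at most one Channel, which would explain why a 1000-record batch results in only a handful of inserts. A minimal illustrative sketch (not the original code) of processing every deaggregated record in the batch:
for (UserRecord userRecord : userRecordsList) {
    Channel channel = new ConversionToJava<Channel>()
            .getAsGivenEntity(new String(userRecord.getData().array()), Channel.class);
    // ...apply the same insert/delete/reject logic per record here...
}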
I have a Spring Boot application where I created a POST method that sends data in a streaming fashion to the caller. Code below:
@RequestMapping(value = "/mapmatchstreaming", method = RequestMethod.POST)
public ResponseEntity<StreamingResponseBody> handleRequest(@RequestParam(value = "data", required = true) String data, @RequestParam(value = "mnr", required = true) Boolean mnr) {
logger.info("/mapmatchstreaming endpoint");
try {
Semaphore semaphore = new Semaphore(1);
ObjectMapper mapper = new ObjectMapper();
StreamingResponseBody responseBody = new StreamingResponseBody() {
@Override
public void writeTo (OutputStream outputStream) throws IOException {
// For each map
DataReader dataReader = new DataReader(data, "2020.06.011");
for(String mapRoot: dataReader.getMapsFolders()) {
dataReader = new DataReader(data, "2020.06.011");
DistributedMapMatcherStreaming distributedMapMatcher = new DistributedMapMatcherStreaming(dataReader.getTraces(), mapRoot, dataReader.getBoundingBox());
distributedMapMatcher.mapMatchBatch(new DistributedMapMatcherResult() {
@Override
public void onCorrectlyMapMatched(MapMatchedTrajectory mapMatchedTrajectory) {
try {
semaphore.acquire();
outputStream.write(mapper.writeValueAsString(mapMatchedTrajectory).getBytes());
outputStream.flush();
}
catch (Exception e) {
e.printStackTrace();
logger.error(String.format("Writing to output stream error: %s", e.getMessage()));
} finally{
semaphore.release();
}
}
});
}
}
};
return new ResponseEntity<StreamingResponseBody>(responseBody, HttpStatus.OK);
}
catch (Exception e) {
logger.error(String.format("Map-matching result ERROR: %s", ExceptionUtils.getStackTrace(e)));
return new ResponseEntity<StreamingResponseBody>(HttpStatus.BAD_REQUEST);
}
}
It works nicely, but the problem is that if multiple calls arrive at this method, all of them run in parallel, even though I have set server.tomcat.threads.max=1. In the non-streaming version, each new call waits for the current one to complete.
Is it possible to have blocking streaming calls in Spring? Thanks.
EDIT: I temporarily solved this by using a global semaphore with only one permit (a sketch follows below), but I don't think this is the ideal solution.
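For reference, that workaround could look roughly like the following (the field name is illustrative); each streaming response then has to finish writing before the next one may start:
private static final Semaphore streamPermit = new Semaphore(1, true);

StreamingResponseBody responseBody = new StreamingResponseBody() {
    @Override
    public void writeTo(OutputStream outputStream) throws IOException {
        try {
            streamPermit.acquire(); // blocks until the previous stream completes
            // ...existing map-matching and writing logic...
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException("Interrupted while waiting for the stream permit", e);
        } finally {
            streamPermit.release();
        }
    }
};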
I need to catch errors during authentication (like wrong parameters), but I can find nothing about how to do it. I have isolated the procedure with threads, but with this crude approach the user can't understand what went wrong.
Below, my code:
public static boolean access(String db, String ip, String usr, String pwd){
Map<String, String> persistenceMap = new HashMap<>();
persistenceMap.put("hibernate.ogm.datastore.database", db);
persistenceMap.put("hibernate.ogm.datastore.host", ip);
persistenceMap.put("hibernate.ogm.datastore.username", usr);
persistenceMap.put("hibernate.ogm.datastore.password", pwd);
Thread mainThread = Thread.currentThread();
Thread logThread = new Thread(() -> {
Connection.EMF = Persistence.createEntityManagerFactory("ogm-jpa-mongo", persistenceMap);
Connection.EM = Connection.EMF.createEntityManager();
Connection.isOpen = true;
});
Thread timeOut = new Thread( () -> {
try{ Thread.sleep( 5000 ); }
catch(InterruptedException ex){ }
mainThread.interrupt();
});
logThread.start();
timeOut.start();
try{ logThread.join(); }
catch(InterruptedException ex){ return false; }
Connection.TM = com.arjuna.ats.jta.TransactionManager.transactionManager();
return Connection.isOpen;
}
The problem is that when I enter wrong parameters, a MongoSecurityException is thrown, but I can't catch it; I can only read it on the monitor thread. Any ideas?
I believe this is a result of the way your version of Hibernate catches the MongoSecurityException; it is likely caught (and swallowed) inside a nested try-catch block.
The correct answer here is to update your Hibernate version to the latest release. However, if you would like to see that exception I think you can do the following.
String message = "";
try {
logThread.join();
} catch(Throwable e) {
throw e;
} catch(Exception e) {
message = e.getMessage();
}
If that doesn't work, you might be able to walk the cause chain as follows.
String message = "";
try {
logThread.join();
} catch(Throwable e) {
e.getCause();
e.getCause().getCause();
e.getCause()..getCause().getCause();
}
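If neither variant surfaces the exception, another option (my suggestion, not part of the original answer) is to capture it on the login thread itself, since an exception thrown inside logThread never propagates through logThread.join(). A minimal sketch using an UncaughtExceptionHandler and java.util.concurrent.atomic.AtomicReference:
AtomicReference<Throwable> loginError = new AtomicReference<>();
logThread.setUncaughtExceptionHandler((t, e) -> loginError.set(e));
logThread.start();
try {
    logThread.join();
} catch (InterruptedException ex) {
    return false;
}
if (loginError.get() != null) {
    // e.g. MongoSecurityException for wrong credentials; report it to the user here
    return false;
}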
I am using an ExecutorService in Java to execute some threads, let's say ten; the number of threads may vary. Each thread executes a SQL Server query. I am using the Future and Callable classes to submit the tasks, and I am getting the results [using future.get()] once each thread is finished.
Now my requirement is that I need to know which query each thread executed once its result is returned, even if the result is an empty set.
Here is my code:
List<Future<List>> list = new ArrayList<Future<List>>();
int totalThreads = allQueriesWeight.size();
ExecutorService taskExecutor = Executors.newFixedThreadPool(totalThreads);
for (String query : allQueriesWeight) {//allQueriesWeight is an arraylist containing sql server queries
SearchTask searchTask = new SearchTask(query);
Future<List> submit = taskExecutor.submit(searchTask);
list.add(submit);
}
Here is my call function:
#Override
public List<SearchResult> call() throws Exception {
java.sql.Statement statement = null;
Connection co = null;
List<SearchResult> allSearchResults = new ArrayList();
try {
//executing query and getting results
while (r1.next()) {
...
allSearchResults.add(r);//populating array
}
} catch (Exception e) {
Logger.getLogger(GenericResource.class.getName()).log(Level.SEVERE, null, e);
} finally {
if (statement != null) {
statement.close();
}
if (co != null) {
co.close();
}
}
return allSearchResults;
}
Here is how I am getting the results:
for (Future<List> future : list) {
try {
System.out.println(future.get().size());
List<SearchResult> sr = future.get();
} catch (InterruptedException ex) {
Logger.getLogger(GenericResource.class.getName()).log(Level.SEVERE, null, ex);
} catch (ExecutionException ex) {
Logger.getLogger(GenericResource.class.getName()).log(Level.SEVERE, null, ex);
}
}
In the above for loop, I need to identify which query produced each returned result. I am a newbie and any help/suggestions are highly appreciated.
Thanks.
Alternative 1:
You have both lists in the same order and of the same size, so you can simply do as below:
for (int i = 0; i < allQueriesWeight.size(); i++) {
allQueriesWeight.get(i);
futureList.get(i);
}
Alternative 2:
If all the queries are different, you can use a map as shown below but this approach will lose the order of execution.
int totalThreads = allQueriesWeight.size();
Map<String, Future<List>> map = new HashMap<>();
ExecutorService taskExecutor = Executors.newFixedThreadPool(totalThreads);
for (String query : allQueriesWeight) {//allQueriesWeight is an arraylist containing sql server queries
SearchTask searchTask = new SearchTask(query);
Future<List> submit = taskExecutor.submit(searchTask);
map.put(query, submit);
}
And then iterate over the map:
for (Map.Entry<String, Future<List>> entry : map.entrySet()) {
    System.out.println("query is: " + entry.getKey());
    List<SearchResult> sr = entry.getValue().get();
}
Alternative 3
If you want to keep the order, create a class with the Future and the query as attributes, and then put instances of that class in a list:
public class ResultWithQuery {
private final Future<List<?>> future;
private final String query;
public ResultWithQuery(Future<List<?>> future, String query) {
this.future = future;
this.query = query;
}
public Future<List<?>> getFuture() {
return future;
}
public String getQuery() {
return query;
}
}
And
List<ResultWithQuery> list = new ArrayList<>();
int totalThreads = allQueriesWeight.size();
ExecutorService taskExecutor = Executors.newFixedThreadPool(totalThreads);
for (String query : allQueriesWeight) {//allQueriesWeight is an arraylist containing sql server queries
SearchTask searchTask = new SearchTask(query);
Future<List> submit = taskExecutor.submit(searchTask);
list.add(new ResultWithQuery(submit, query));
}
And iterate over the list:
for (ResultWithQuery resQuery: list) {
try {
String query = resQuery.getQuery();
List<SearchResult> sr = resQuery.getFuture().get();
} catch (InterruptedException ex) {
Logger.getLogger(GenericResource.class.getName()).log(Level.SEVERE, null, ex);
} catch (ExecutionException ex) {
Logger.getLogger(GenericResource.class.getName()).log(Level.SEVERE, null, ex);
}
}
I'm having trouble figuring out how to deal with disconnections with the hbc Twitter API. The docs say I need to slow down reconnect attempts according to the type of error experienced. Where do I get the type of error experienced? Is it in the msgQueue, the eventQueue, or somewhere else?
@Asynchronous
@Override
public void makeLatestsTweets() {
msgList = new LinkedList<Tweet>();
BlockingQueue<String> msgQueue = new LinkedBlockingQueue<String>(100);
BlockingQueue<Event> eventQueue = new LinkedBlockingQueue<Event>(100);
Hosts hosebirdHosts = new HttpHosts(Constants.SITESTREAM_HOST);
StatusesFilterEndpoint hosebirdEndpoint = new StatusesFilterEndpoint();
userIds = addFollowings();
hosebirdEndpoint.followings(userIds);
Authentication hosebirdAuth = new OAuth1(CONSUMER_KEY, CONSUMER_SECRET,
TOKEN, SECRET);
ClientBuilder builder = new ClientBuilder().hosts(hosebirdHosts)
.authentication(hosebirdAuth).endpoint(hosebirdEndpoint)
.processor(new StringDelimitedProcessor(msgQueue))
.eventMessageQueue(eventQueue);
Client hosebirdClient = builder.build();
hosebirdClient.connect();
while (!hosebirdClient.isDone()) {
try {
String msg = msgQueue.take();
Tweet tweet = format(msg);
if (tweet != null) {
System.out.println(tweet.getTweetsContent());
msgList.addFirst(tweet);
if (msgList.size() > tweetListSize) {
msgList.removeLast();
}
caller.setMsgList(msgList);
}
} catch (InterruptedException e) {
hosebirdClient.stop();
e.printStackTrace();
} catch (JSONException e) {
e.printStackTrace();
}
}
}
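Connection lifecycle information arrives on the eventQueue that you registered via eventMessageQueue(eventQueue); the msgQueue only carries the raw messages. A rough sketch of draining it inside the read loop (the accessor names on Event are assumptions; check com.twitter.hbc.core.event.Event in your hbc version):
try {
    Event event = eventQueue.poll(5, TimeUnit.SECONDS);
    if (event != null) {
        // Pick a reconnect backoff based on the event type, e.g. back off
        // longer for HTTP errors than for plain disconnects.
        System.out.println(event.getEventType() + " : " + event.getMessage());
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}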