The DB team for our application has reported an ASYNC_NETWORK_IO wait issue on a large but already-optimized query that executes in around 36 seconds and returns around 644,000 rows.
The main reasons behind this are usually one of these:
A. A problem with the client application
B. A problem with the network (but we have 1 Gb Ethernet)
So this is likely an issue on the code side: the session must wait for the client application to process the data received from SQL Server before it can signal SQL Server that it is ready to accept new data.
This is a common scenario that may reflect bad application design, and it is the most frequent cause of excessive ASYNC_NETWORK_IO wait values.
Here is how we are fetching the data from the DB in code:
try {
    queue.setTickets(jdbcTemplate.query(sql, params, new QueueMapper()));
} catch (DataAccessException e) {
    logger.error("get(QueueType queueType, long ticketId)", e);
}
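For context on the application-design angle: the usual remedy is to consume rows as they arrive instead of materializing the whole 644,000-row list first. This is only a sketch under assumptions: that QueueMapper is a RowMapper<Ticket> and that a hypothetical processTicket consumer can handle rows incrementally.

QueueMapper mapper = new QueueMapper();
RowCallbackHandler handler = rs -> {
    // handle each row as it arrives, so the client keeps draining the
    // connection instead of leaving SQL Server in ASYNC_NETWORK_IO waits
    Ticket ticket = mapper.mapRow(rs, rs.getRow());
    processTicket(ticket); // hypothetical incremental consumer
};
jdbcTemplate.setFetchSize(1000); // batch size is a guess; the effect depends on the driver
jdbcTemplate.query(sql, params, handler);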
Can anyone advise me on this?
Thanks in advance.
I have a Spring Boot web application with the functionality to update an entity called StudioLinking. This entity describes a temporary, mutable, descriptive logical link between two IoT devices for which my web app is their cloud service. The links between these devices are ephemeral in nature, but the StudioLinking entity persists in the database for reporting purposes. StudioLinking is stored to the SQL-based datastore in the conventional way using Spring Data/Hibernate. From time to time this StudioLinking entity will be updated with new information from a REST API. When that link is updated, the devices need to respond (change colors, volume, etc.). Right now this is handled by polling every 5 seconds, but this creates lag between the moment a human user enters an update into the system and the moment the IoT devices actually update. It could be as little as a millisecond or up to 5 seconds! Clearly increasing the polling frequency is unsustainable, and the vast majority of the time there are no updates at all!
So, I am trying to develop another REST API on this same application with HTTP long polling, which will return when a given StudioLinking entity is updated or after a timeout. The listeners do not support WebSocket or similar, leaving me with long polling. Long polling can create a race condition where you have to account for the possibility that one of two consecutive messages may be "lost" as it comes in between HTTP requests (while the connection is closing and reopening, a new "update" might come in and not be "noticed" if I used plain Pub/Sub).
It is important to note that this "subscribe to updates" API should only ever return the LATEST and CURRENT version of the StudioLinking, and should only do so when there is an actual update or if an update happened since the last check-in. The "subscribe to updates" client will initially POST an API request to set up a new listening session and pass that along so the server knows who they are, because it is possible that multiple devices will need to monitor updates to the same StudioLinking entity. I believe I can accomplish this by using separately named consumers in the Redis XREAD. (Keep this in mind for later in the question.)
After hours of research, I believe the way to accomplish this is by using Redis Streams.
I have found these two links regarding Redis Streams in Spring Data Redis:
https://www.vinsguru.com/redis-reactive-stream-real-time-producing-consuming-streams-with-spring-boot/
https://medium.com/@amitptl.in/redis-stream-in-action-using-java-and-spring-data-redis-a73257f9a281
I have also read this link about long polling; like the two links above, it just uses a sleep timer during the long poll for demonstration purposes, but obviously I want to do something useful instead:
https://www.baeldung.com/spring-deferred-result
All of these links were very helpful. Right now I have no problem figuring out how to publish the updates to the Redis Stream (this is untested "pseudo-code", but I don't anticipate any issues implementing it):
// In my StudioLinking entity
@PostUpdate
public void postToRedis() {
    StudioLinking link = this;
    ObjectRecord<String, StudioLinking> record = StreamRecords.newRecord()
            .ofObject(link)
            .withStreamKey(streamKey); // I am creating a stream for each individual linking, probably?
    this.redisTemplate
            .opsForStream()
            .add(record)
            .subscribe(System.out::println);
    atomicInteger.incrementAndGet();
}
But I fall flat when it comes to subscribing to said stream. Here is basically what I want to do - please excuse the butchered pseudocode, it is for idea purposes only. I am well aware that the code is in no way indicative of how the language and framework actually behave :)
// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public DeferredResult<ResponseEntity<?>> subscribeToUpdates(@PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    LOG.info("Received async-deferredresult request");
    DeferredResult<ResponseEntity<?>> output = new DeferredResult<>(5000L);
    output.onTimeout(() ->
            output.setErrorResult(
                    ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT)
                            .body("IT WAS NOT UPDATED!")));

    ForkJoinPool.commonPool().submit(() -> {
        //----------------------------------------------
        // Made-up stuff... here is where I want to subscribe to a stream and block!
        //----------------------------------------------
        LOG.info("Processing in separate thread");
        // Subscribe to the Redis Stream, get any updates that happened between long polls,
        // then block until/if a new message comes over the stream
        var subscription = listenerContainer.receiveAutoAck(
                Consumer.from(String.valueOf(linkId), String.valueOf(updatesId)),
                StreamOffset.create(String.valueOf(linkId), ReadOffset.lastConsumed()),
                streamListener);
        listenerContainer.start();
        output.setResult(ResponseEntity.ok("IT WAS UPDATED!"));
    });

    LOG.info("servlet thread freed");
    return output;
}
So is there a good explanation of how I would go about this? I think the answer lies within https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ReactiveRedisTemplate.html but I am not a big enough Spring power user to really understand the terminology in the JavaDocs (the Spring reference documentation is really good, but the JavaDocs are written in dense technical language which I appreciate but don't quite understand yet).
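For what it's worth, here is a minimal sketch of what that reactive read might look like, assuming a configured ReactiveRedisTemplate bean, the linkId/updatesId naming from the pseudocode above, and a consumer group that already exists; this is one reading of the JavaDocs, not a verified implementation:

// Instead of the listenerContainer block inside the worker above:
// read at most one record for this consumer, waiting up to 5 seconds.
reactiveRedisTemplate
        .opsForStream()
        .read(Consumer.from(String.valueOf(linkId), String.valueOf(updatesId)),
                StreamReadOptions.empty().block(Duration.ofSeconds(5)).count(1),
                StreamOffset.create(String.valueOf(linkId), ReadOffset.lastConsumed()))
        .next() // first record wins; completes empty if the blocking read times out
        .subscribe(
                record -> output.setResult(ResponseEntity.ok("IT WAS UPDATED!")),
                output::setErrorResult);

With the reactive template the ForkJoinPool worker becomes unnecessary, since the subscription itself does not block the caller.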
There are two more hurdles to my implementation:
My understanding of Spring is not at 100% yet. I haven't yet reached that a-ha moment where I really, fully understand why all these beans are floating around. I think this is key to why I am not getting things here: the Redis configuration is floating around in the Spring ether and I am not grasping how to simply call it. I really need to keep investigating this (it is a huge hurdle of Spring for me).
These StudioLinking entities are short-lived, so I need to do some cleanup too. I will implement this later once I get the whole thing off the ground; I do know it will be needed.
Why don't you use a blocking polling mechanism? There is no need for the fancy reactive machinery in spring-data-redis. Just use a simple blocking read with a 5-second timeout, so this call might take around 6 seconds or so in the worst case. You can decrease or increase the blocking timeout.
class LinkStatus {
    private final boolean updated;

    LinkStatus(boolean updated) {
        this.updated = updated;
    }
}

// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public LinkStatus subscribeToUpdates(
        @PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    StreamOperations<String, String, String> op = redisTemplate.opsForStream();
    Consumer consumer = Consumer.from("test-group", "test-consumer");
    // auto-ack blocking stream read of size 1 with a timeout of 5 seconds
    StreamReadOptions readOptions =
            StreamReadOptions.empty().block(Duration.ofSeconds(5)).count(1).autoAcknowledge();
    // reading through a consumer group, so ask for this consumer's next undelivered record
    List<MapRecord<String, String, String>> records =
            op.read(consumer, readOptions, StreamOffset.create("test-stream", ReadOffset.lastConsumed()));
    return new LinkStatus(!CollectionUtils.isEmpty(records));
}
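One detail to be careful with (my addition, not part of the original answer): XREADGROUP fails if the consumer group does not exist yet, so the group should be created once per stream before the first poll, for example:

// Create the group once at startup; Redis raises an error (BUSYGROUP)
// if it already exists, which is safe to swallow here.
try {
    redisTemplate.opsForStream().createGroup("test-stream", "test-group");
} catch (RedisSystemException e) {
    // group already exists - nothing to do
}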
I am trying to execute a query which sometimes takes a huge amount of time and causes a closed connection, meaning the connection gets closed before the query executes/commits. I want to recover from the error, get a new connection, and then retry.
Caused by: java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:4051)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1473)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at com.fimt.sat.testora12date.dao.DateSaverGetterDao.testAbandonedConnectionWithDS(DateSaverGetterDao.java:73)
You can catch this particular error:
public void save(MyData data) {
    try {
        ...
    } catch (SQLRecoverableException e) {
        // Better: handle a parameter that sets the maximum number of retries.
        // Eventually consider retrying on a secondary thread,
        // delayed by a certain number of seconds.
        save(data);
    }
}
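Since the answer leaves the retry cap as a comment, here is a hedged sketch of what a bounded variant could look like (doSave and the linear backoff are my assumptions, not part of the answer):

public void save(MyData data) {
    final int maxRetries = 3; // assumed limit - tune as needed
    for (int attempt = 1; ; attempt++) {
        try {
            doSave(data); // hypothetical method doing the actual JDBC work
            return;       // success: stop retrying
        } catch (SQLRecoverableException e) {
            if (attempt >= maxRetries) {
                throw new IllegalStateException("Save failed after " + maxRetries + " attempts", e);
            }
            try {
                Thread.sleep(1000L * attempt); // simple linear backoff between attempts
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Interrupted while retrying", ie);
            }
        }
    }
}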
The answer of @Davide Lorenzo MARINO is great, unless the query is heavy enough that it still fails to complete even after several such recoveries.
I'm not a professional in Oracle, but what I've found is that you can configure some kind of RAC failover that will keep your transaction alive even after exceeding the timeout.
Generally, though, my view is that it is better to split the data somehow into multiple queries.
Better to split/bucket the data in the select query and commit at intervals. E.g., if the query covers a duration of 2 months, bucket/loop/split it into 15-day periods, execute the necessary statements for each period, and commit it.
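As an illustration of that bucketing idea, here is a sketch with made-up table and column names (orders/order_date) and plain JDBC; it is not the original poster's schema:

// Walk a two-month window in 15-day slices and commit after each slice.
void processInBuckets(DataSource dataSource, LocalDate start, LocalDate end) throws SQLException {
    try (Connection conn = dataSource.getConnection()) {
        conn.setAutoCommit(false);
        String sql = "SELECT * FROM orders WHERE order_date >= ? AND order_date < ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (LocalDate from = start; from.isBefore(end); from = from.plusDays(15)) {
                LocalDate to = from.plusDays(15).isBefore(end) ? from.plusDays(15) : end;
                ps.setDate(1, java.sql.Date.valueOf(from));
                ps.setDate(2, java.sql.Date.valueOf(to));
                try (ResultSet rs = ps.executeQuery()) {
                    // ... do the necessary statements for this slice ...
                }
                conn.commit(); // commit per 15-day bucket
            }
        }
    }
}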
I got the following error upon connecting to the DB and accessing the tables:
"An error was encountered performing the requested operation"
Closed Connection
Vendor code 17008.
I was able to resolve this after downloading the latest version of SQL Developer, 19.2.1.247. The earlier version was 3.x.
I currently have the following situation, which has bothered me for a couple of months now.
The case
I have built a Java (FX) application which serves as a cash register for my shop. The application contains a lot of classes (such as Customer, Transaction, etc.), which are shared with the server API. The server API is hosted on Google App Engine.
Because we also have an online shop, I have chosen to build a cache of the entire database on startup of the application. To do this I call the GET of my Data API for each class/table:
protected QueryBuilder performGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException, ApiException, JSONException {
    Connection conn = connectToCloudSQL();
    log.info("Parameters: " + Functions.parameterMapToString(req.getParameterMap()));

    String tableName = this.getTableName(req);
    log.info("TableName: " + tableName);
    GetQueryBuilder queryBuilder = DataManager.executeGet(conn, req.getParameterMap(), tableName, null);

    // Get the correct method to create the objects
    String camelTableName = Functions.snakeToCamelCase(tableName);
    String parsedTableName = Character.toUpperCase(camelTableName.charAt(0)) + camelTableName.substring(1);

    List<Object> objects = new ArrayList<>();
    try {
        log.info("Parsed Table Name: " + parsedTableName);
        Method creationMethod = ObjectManager.class.getDeclaredMethod("create" + parsedTableName, ResultSet.class, boolean.class);
        while (queryBuilder.getResultSet().next()) {
            // Create new objects with the ObjectManager
            objects.add(creationMethod.invoke(null, queryBuilder.getResultSet(), false));
        }
        log.info("List of objects created");
        creationMethod = null;
    } catch (Exception e) {
        camelTableName = null;
        parsedTableName = null;
        objects = null;
        throw new ApiException(e, "Something went wrong while iterating through ResultSet.", ErrorStatus.NOT_VALID);
    }

    Functions.listOfObjectsToJson(objects, res.getOutputStream());
    log.info("GET Request succeeded");

    // Clean up objects
    camelTableName = null;
    parsedTableName = null;
    objects = null;
    closeConnection(conn);
    return queryBuilder;
}
It simply gets every row from the requested table in my Cloud SQL database. Then it creates the objects, using the class that is shared with the client application. Lastly, it converts these objects to JSON using GSON. Some of my tables have 10,000+ rows, and then it takes approx. 5-10 seconds to do this.
At the client, I convert this JSON back to a list of objects using the same shared class. First I load the essential classes sequentially (because otherwise the application won't start), and after that I load the rest of the classes in the background with separate threads.
The problem
Every time I load the cache, there is a chance (1 in 4) that the server responds with a DeadlineExceededException on some of the bigger tables. I believe this has something to do with Google App Engine not being able to spin up a new instance in time, so the computation time exceeds the limit.
I know it has something to do with loading the objects in background threads, because these all start at the same time. When I delay the start of these threads by 3 seconds, the error occurs a lot less, but it is still present. Because the application loads 15 classes in the background, delaying them is not ideal, as the application will only work partly until it is done. It is also not an option to load everything before starting, because that would take more than 2 minutes.
Does anyone know how to set up some load balancing on Google App Engine for this? I would like to solve this server side.
You clearly have an issue with warm-up requests and a query that takes quite long. You have the usual options:
Do some profiling and reduce the cost of your method invocations
Use caching (memcache) to cache some of the results
If those options don't work for you, you should parallelize your computations. One thing that comes to mind is that you could reliably reduce request times if you simply split your request into multiple parallel requests, like so:
Let's say your table contains 5k rows.
Then you create 50 requests, each handling 100 rows.
Aggregate the results on the server or client side and respond.
It'll be quite tough to do this on just the server side, but it should be possible if your now (much) smaller tasks return within a couple of seconds; a client-side sketch of the split follows.
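For illustration only, a hedged sketch of that fan-out; fetchRows(offset, limit) is a made-up stand-in for one HTTP call to a paged variant of the GET endpoint, and the pool size is a guess:

// Fire 50 parallel range requests of 100 rows each and merge the pieces.
List<Object> fetchAllInChunks() throws Exception {
    int total = 5000, chunk = 100;
    ExecutorService pool = Executors.newFixedThreadPool(10);
    List<Future<List<Object>>> futures = new ArrayList<>();
    for (int offset = 0; offset < total; offset += chunk) {
        final int off = offset;
        futures.add(pool.submit(() -> fetchRows(off, chunk))); // hypothetical paged call
    }
    List<Object> all = new ArrayList<>();
    for (Future<List<Object>> future : futures) {
        all.addAll(future.get()); // aggregate, preserving request order
    }
    pool.shutdown();
    return all;
}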
Alternatively, you could return a job id at once and make the client poll for the result after a couple of seconds. This would require a small change on the client side, but it's the better option IMHO, especially if you want to use a task queue for creating your response.
I'm currently working on a telephone application, and for easy portability I'm making use of Unity3D. The application's design looks nice and crisp and scales well to all of my target resolutions, but the networking is giving me an issue.
I'm using Java for the server backend, and I'm using JDBC to manage database connections. The problem is that this application is sure to have a few thousand users at minimum (based on my current following, blogs, and marketing techniques), and I would like to make sure that I'm doing this correctly to avoid any lockups from SQL being used concurrently.
This application pulls everything that is needed from the database. For security and glitch-prevention reasons, if the database cannot be reached, the server lets the client (application) know that there was an error.
Here's what I'm worried about: when a user logs in, a few things are done almost instantly. The database is checked for their login credentials; if successful, the client then loads the next stage of the application and sends a packet to the server. The server then grabs more information from the database (this is done through a total of three queries in the shortest form I can come up with). However, what happens if 200-300 people are logging in, 300-400 are spending tokens (which requires DB calls), and 200-300 are requesting database data elsewhere? That's around 1,000 database calls coming in.
I don't want the application to seem really laggy.
Here's how I'm currently handling it after a little research, but it doesn't feel quite right. I'm looking for advice and corrections. (decodeTcp is called when a packet with the header id of X is received.)
public void decodeTcp(Session session) throws IOException {
    try {
        ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
        ScheduledFuture<?> scheduledFuture = scheduledExecutorService.schedule(new Callable<Object>() {
            public Object call() throws Exception {
                return Database.getSingleton().fetch((User) session.getAttachment());
            }
        }, 0, TimeUnit.SECONDS);
        int results = (int) scheduledFuture.get();
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
Where Database#fetch(Object) returns an int.
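One possible correction, offered as a sketch rather than a verified fix: share a single bounded pool across all packets instead of creating a new ScheduledExecutorService per call, and bound the wait on the future. The pool size and timeout here are assumptions to tune:

// One shared pool for all DB work; a new executor per packet leaks threads.
private static final ExecutorService DB_POOL = Executors.newFixedThreadPool(16);

public void decodeTcp(Session session) throws IOException {
    Future<Integer> future = DB_POOL.submit(
            () -> Database.getSingleton().fetch((User) session.getAttachment()));
    try {
        int results = future.get(5, TimeUnit.SECONDS); // fail fast instead of waiting forever
        // ... use results ...
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        e.printStackTrace();
    }
}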
I am trying to write a mechanism that will manage my save-to-DB operation.
I send the server a list of objects; it iterates over them and saves each one.
Now, if any of them fail for some strange reason (an exception), they are saved to another list, which a timer processes every 5 seconds, trying to re-save them.
I then have a locking problem, which I can solve with another boolean.
My function that saves the lost objects is:
private void saveLostDeals() {
    synchronized (unsavedDeals) {
        // Use an explicit iterator: calling remove() on the list inside a
        // for-each loop would throw a ConcurrentModificationException.
        Iterator<DealBean> iterator = unsavedDeals.iterator();
        while (iterator.hasNext()) {
            DealBean unsavedDeal = iterator.next();
            boolean successfullySaved = reportDeal(unsavedDeal, false);
            if (successfullySaved) {
                iterator.remove();
            }
        }
    }
}
And my reportDeal() method, which is called for both regular reports and lost-deal reports:
try {
    ...
} catch (HibernateException e) {
    ...
    if (fallback) {
        synchronized (unsavedDeals) {
            unsavedDeals.add(deal);
        }
    }
    session.getTransaction().rollback();
} finally {
    ....
}
Now, when a lost deal is being saved and an exception occurs, the synchronized block prevents the two threads from touching the list at the same time.
What do you have to say about this save-fallback mechanism? Are there better design patterns for dealing with this common issue?
I would suggest using either a proxy or aspects to handle the rollback/retry mechanism. The proxy could use something like the strategy pattern to decide what action to take.
If, however, you don't want to retry immediately but, say, in 5 seconds as you propose, I would consider building that into the contract of your database layer by providing asynchronous routines to begin with, something like dao.scheduleStore(o); or dao.asyncStore(o);.
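A rough sketch of what that asynchronous contract might look like, using a single-threaded scheduler (AsyncDao and the 5-second re-queue delay are illustrative, not a prescribed implementation):

class AsyncDao {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void asyncStore(DealBean deal) {
        scheduler.execute(() -> storeWithRetry(deal));
    }

    private void storeWithRetry(DealBean deal) {
        try {
            // ... the actual Hibernate save ...
        } catch (HibernateException e) {
            // re-queue this deal for another attempt in 5 seconds
            scheduler.schedule(() -> storeWithRetry(deal), 5, TimeUnit.SECONDS);
        }
    }
}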
It depends.
For example:
Request to save entity -> exception occurs -> DB connection problem -> in the exception block, retry saving the entity to a fallback DB -> return the response to the request.
Request to save entity -> exception occurs -> DB connection problem -> in the exception block, retry saving the entity to an in-memory store in the application -> return the response to the request.
Request to save entity -> exception occurs -> unknown exception -> in the exception block, save the entity to an XML file store (serialize it to XML) -> return a response saying the data is temporarily saved and will be written to the DB later.
Timer -> checks the file store for any serialized XML -> updates the DB. (A sketch of this file-store fallback follows.)
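For the XML file-store scenario, a hedged sketch using the JDK's XMLEncoder; DealBean is assumed to be a JavaBean with a public no-arg constructor, and getId() is a made-up accessor:

// Serialize a deal that could not be saved to the DB into a per-deal XML file;
// the timer task later reads these back and re-saves them to the DB.
void saveToFileStore(DealBean deal) {
    File file = new File("deal-store", deal.getId() + ".xml"); // hypothetical id accessor
    file.getParentFile().mkdirs();
    try (XMLEncoder encoder = new XMLEncoder(
            new BufferedOutputStream(new FileOutputStream(file)))) {
        encoder.writeObject(deal);
    } catch (FileNotFoundException e) {
        throw new UncheckedIOException(e);
    }
}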
Points to watch out for:
Async calls are better in such scenarios than making the requesting client wait.
In the case of in-memory saving, watch the amount of data held in memory during a prolonged DB failure; it might bring down the whole application.
For transactions, decide whether you want to roll back or save the intermediate state.
Keep an eye on the consistency of the data.