We are trying to use MySQL's locking functions GET_LOCK and RELEASE_LOCK in our Spring Java application. Each function is wrapped in its own stored procedure, which we simply invoke from our Java code. The queries inside the procs are given below.
We have been monitoring the execution time of these functions and found that a call sometimes takes a millisecond, but at other times takes around 400 to 600 ms. I have tried the following approaches, but none of them made much of a difference:
1. Use "Do" in place of select with these functions .
2. Using an int data type of the key which we are using as lock string .
3. Decreasing the length of lock string .
I am using a timeout of 0 so that connections do not block waiting for the lock.
Can anyone suggest a way to optimize this? Is there a way of tuning the InnoDB buffer pool or some related configuration?
Please let me know if any other input is required from my end.
Please find below the proc code and some stats for your reference.
Current MySQL code:
Proc get_Name_lock:
-- Using SELECT
SELECT GET_LOCK(Name, 0) INTO c_Name_flag;
-- Using DO
DO GET_LOCK(Name, 0);
Proc release_Name_lock:
-- Using SELECT
SELECT RELEASE_LOCK(Name) INTO c_Name_flag;
-- Using DO
DO RELEASE_LOCK(Name);
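For context, this is roughly how we invoke and time a lock call from the Java side (a simplified sketch with plain JDBC; dataSource and lockName are placeholders, and in the real code we call the procs above rather than GET_LOCK directly):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LockTimer {
    // Acquires the named lock with a 0 timeout and logs how long the call took.
    static boolean acquireAndTime(DataSource dataSource, String lockName) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT GET_LOCK(?, 0)")) {
            ps.setString(1, lockName);
            long start = System.nanoTime();
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                boolean acquired = rs.getInt(1) == 1; // 1 = acquired, 0 = held by another session
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("GET_LOCK took " + elapsedMs + " ms, acquired=" + acquired);
                return acquired;
            }
        }
    }
}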
Request rate: around 10 requests/sec.
MySQL version: 5.7.19-log
I have implemented a stored procedure that generates a CSV report based on the data in a transaction_table and stores the generated report in report_table for future reference.
I execute and pass arguments to this procedure using JPA in a Java program, and it works perfectly fine. The problems are:
Since we have a huge amount of transaction data in transaction_table, it takes some time for the report to be generated, and during this time the thread spun up to generate the report is blocked.
If the database connection running the procedure breaks in the middle of execution, not only do we not get the report, but the database session handling the request also never completes and remains in memory in some unknown state. So we need an active database connection for the entire execution time.
My questions are:
Is there any way to call the procedure and return immediately, without having an application thread blocked for the entire execution time of the stored procedure?
Since there is a chance of losing the database connection, is there any way the database can run the procedure independently of the calling application, so that it completes even in the absence of an active connection?
Note that I need to pass the report parameters from application to the procedure.
I have Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production, running on the server.
I did come up with a solution for this problem, so I decided to share it in case you have a similar problem.
First of all, let me further explain the problem, and then share the solution.
The problem was: I am using a connection pool in JPA to connect to the database, and I use JPA annotations to execute a procedure on the database (in a separate thread on the application side). The query is dealing with transactions for generating a massive report, so it takes some time to execute. When, for whatever reason, the database connection obtained from the pool breaks, not only does the database procedure not complete, but it also does not fail, so it never frees up the resources it has in hand.
Solution:
The short answer is: I created another procedure (a wrapper procedure) that creates and starts a DBMS_SCHEDULER job (with a random name), which runs a DBMS_SCHEDULER program, which in turn runs the main procedure. Since the wrapper procedure finishes in a matter of milliseconds, it does not hold the DB connection long enough for a broken connection to be a problem.
Long answer:
Step 1: creating the program.
BEGIN
  DBMS_SCHEDULER.create_program(
    program_name        => 'DBUSER.PROG_NAME',
    program_action      => 'DBUSER.MAIN_REPORT',
    program_type        => 'STORED_PROCEDURE',
    number_of_arguments => 1, -- number of arguments passed to the procedure
    comments            => NULL,
    enabled             => FALSE);
  -- Do this for each argument that the procedure takes
  DBMS_SCHEDULER.define_program_argument(
    program_name      => 'DBUSER.PROG_NAME',
    argument_name     => NULL,
    argument_position => 1,
    argument_type     => 'VARCHAR2',
    out_argument      => FALSE);
  -- (the actual argument values are set per job; see Step 2)
  DBMS_SCHEDULER.ENABLE(name => 'DBUSER.PROG_NAME');
END;
Step 2: create the wrapper procedure.
create or replace PROCEDURE WRAPPER_PROC
(
  FIRST_ARG IN VARCHAR2
)
IS
  job_name_var VARCHAR2(30);
BEGIN
  -- create a random job name
  job_name_var := DBMS_SCHEDULER.generate_job_name('TEMP_JOB_');
  -- create the job
  dbms_scheduler.create_job(job_name        => job_name_var,
                            program_name    => 'PROG_NAME',
                            start_date      => systimestamp,
                            auto_drop       => true,
                            repeat_interval => null,
                            end_date        => null);
  -- pass the argument to the job
  dbms_scheduler.set_job_argument_value(job_name_var, 1, FIRST_ARG);
  -- limit the job to a single run; together with auto_drop => true,
  -- the job is dropped after it has run
  dbms_scheduler.set_attribute(job_name_var, 'max_runs', 1);
  dbms_scheduler.enable(job_name_var);
  DBMS_OUTPUT.put_line('Job was created successfully');
END WRAPPER_PROC;
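From the Java side, calling the wrapper is then an ordinary, short-lived procedure call. Here is a minimal sketch with plain JDBC (with JPA you could use a StoredProcedureQuery instead); the DataSource wiring is assumed:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ReportStarter {
    // Returns within milliseconds: the wrapper only schedules the job;
    // the report itself is generated by the scheduler inside the database.
    public void startReport(DataSource ds, String firstArg) throws SQLException {
        try (Connection con = ds.getConnection();
             CallableStatement cs = con.prepareCall("{call WRAPPER_PROC(?)}")) {
            cs.setString(1, firstArg);
            cs.execute();
        }
    }
}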
hope it helps!
I have a PL/SQL function that is called from our Java code.
I have the SQL_ID of the PL/SQL function execution and I have access to the V$ views with my read-only DB user. The query takes quite some time to execute. Is there a way to profile the PL/SQL function execution to check where exactly the execution is stuck?
I know how to do this for SQL queries with V$SQL, V$ACTIVE_SESSION_HISTORY and V$SESSION_LONGOPS, but I am unable to figure out how to do this for PL/SQL code.
The PL/SQL function takes 4 minutes to execute, so I can execute quite a few V$ queries manually in that time. What V$ views should I check to find a line in the execution plan/function? Is this even possible?
Maybe you can use DBMS_PROFILER for your problem. But if you want to use this method, you have to install some infrastructure.
I don't want to describe the process of installing "proftab.sql" here; this link shows how it works.
It also shows some quick examples of how to trace a specific function.
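From Java, a profiled run could be bracketed roughly like this (a sketch only; it assumes the proftab.sql infrastructure is installed, and MY_SLOW_FUNCTION is a placeholder for the function under test):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;
import javax.sql.DataSource;

public class ProfilerRun {
    // Starts a profiler run, executes the slow function, then stops the run
    // so the timings land in the plsql_profiler_* tables.
    public void profile(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection()) {
            try (CallableStatement cs = con.prepareCall(
                    "BEGIN DBMS_PROFILER.start_profiler(run_comment1 => 'MYTEST'); END;")) {
                cs.execute();
            }
            try (CallableStatement cs = con.prepareCall("{? = call MY_SLOW_FUNCTION(?)}")) {
                cs.registerOutParameter(1, Types.VARCHAR); // placeholder return type
                cs.setString(2, "some-argument");
                cs.execute();
            }
            try (CallableStatement cs = con.prepareCall(
                    "BEGIN DBMS_PROFILER.stop_profiler; END;")) {
                cs.execute();
            }
        }
    }
}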
I can provide my version of the analyze-query after some testing:
select ppr.runid,
ppr.run_comment1,
decode(nvl(ppd.total_occur, 0), 0, 0, ppd.total_time / ppd.total_occur / 1000000) as Avg_msek,
ppd.total_time / 1000000 totaltime_msek,
ppd.total_occur,
uc.name,
uc.line,
uc.text as Source
from plsql_profiler_data ppd, plsql_profiler_units ppu, user_source uc, plsql_profiler_runs ppr
where ppd.runid = ppu.runid
and ppu.runid = ppr.runid
-- and ppr.run_comment1 = 'MYTEST' --show only a specific testrun
-- and ppr.runid = (select max(runid) from plsql_profiler_runs) /*to get the last run*/
and ppd.unit_number = ppu.unit_number
and ppu.unit_name = uc.name
and ppd.line#(+) = uc.line
and uc.type in ('PACKAGE BODY', 'TYPE BODY')
--order by uc.name, uc.line; --Show all code by line
--order by totaltime_msek desc; --Sort by slowest lines
order by total_occur desc, avg_msek desc; --Sort by calls and slowest ones
This is a Neo4j REST API call related error. From my Java code I'm making a REST API call to a remote Neo4j database, passing a query and parameters. The query being executed is below:
MERGE (s:Sequence {name:'CommentSequence'})
ON CREATE SET s.current = 1
ON MATCH SET s.current = s.current + 1
WITH s.current as sequenceCounter
MERGE (cmnt01:Comment {text: {text}, datetime: {datetime}, type: {type}})
SET cmnt01.id = sequenceCounter
WITH cmnt01
MATCH (g:Game {game_id: {gameid}}), (b:Block {block_id: {bid}, game_id: {gameid}}), (u:User {email_id: {emailid}})
MERGE (b)-[:COMMENT]->(cmnt01)<-[:COMMENT]-(u)
Basically this query generates a sequence number at run time and sets the 'CommentId' property of the Comment node to this sequence number before attaching the comment node to a game's block, i.e. for every comment added by the user I'm adding a sequence number as its id.
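For reference, the REST call is made roughly like this (a trimmed sketch assuming the transactional HTTP endpoint of Neo4j 2.2; host, auth, and JSON building are simplified, and the cypher/paramsJson strings stand for the query and parameters above):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CommentClient {
    // POSTs the Cypher statement with its parameters as a JSON payload.
    static int postComment(String cypher, String paramsJson) throws Exception {
        String body = "{\"statements\":[{\"statement\":\"" + cypher + "\","
                + "\"parameters\":" + paramsJson + "}]}";
        HttpURLConnection con = (HttpURLConnection) new URL(
                "http://localhost:7474/db/data/transaction/commit").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setRequestProperty("Accept", "application/json");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // A non-2xx status often carries an HTML error page rather than JSON.
        return con.getResponseCode();
    }
}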
This works for almost 90% of the cases, but there are a couple of cases a day when it fails with the error below:
ERROR com.exectestret.dao.BaseGraphDAO - Query execution error: Error reading as JSON ''
Why does the Neo4j query not return a proper error code? It just says: error reading as JSON ''.
The Neo4j version is:
Neo4j Community Edition 2.2.1
Thanks,
Deepesh
It gets HTML back and can't read it as JSON, but it should output the failing HTML. Can you check the log output for that and share it too?
Also check your graph.db/messages.log and data/log/console.log for any error messages.
I'm using MySQL 5.1, Apache Tomcat 7, MyBatis 3.1
I have a method with code like this:
for( Order o : orders) {
List<Details> list = getDetails(o);
//Create PDF report ...
}
where getDetails is a method that executes a stored procedure that takes some time to run (1 to 2 seconds). The problem is that I have many orders (nearly 4000) and I need to run this method for every order, and when I hit it the CPU usage of the MySQL process goes up to 90-100%.
Is that normal? Do I need to use Thread.sleep() after getDetails is executed, or do I need to make some modifications to my query?
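The Thread.sleep() variant I have in mind would look something like this (just a sketch; the 200 ms pause is an arbitrary value to throttle the load):

for (Order o : orders) {
    List<Details> list = getDetails(o);
    // Create PDF report ...
    try {
        Thread.sleep(200); // arbitrary pause to give MySQL some breathing room
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}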
I use a Sybase database and am trying to update some values in the database.
While trying to run this, it throws an exception:
com.sybase.jdbc2.jdbc.SybSQLException: The identifier that starts with 'WeeklyStudentEventClassArchiv' is too long. Maximum length is 30.
This table is in another database, and thus I have to use the database name along with the table name, as shown below:
StudActive..WeeklyStudentEventClassArchiv which apparently exceeds 30 characters.
I have to use databasename..tablename in the stored procedure, but it's throwing an exception.
This happens even if I physically embed the SQL in the Java code.
How can this be solved?
The stored procedure is as shown:
create proc dbo.sp_getStudentList(
    @stDate int,
    @endDate int
)
as
begin
    set nocount on
    select distinct studCode
    from StudActive..WeeklyStudentEventClassArchive
    where studCode > 0
    and courseStartDate between @stDate and @endDate
end
StudActive..WeeklyStudentEventClassArchiv which apparently exceeds 30 characters.
Yes - I count 41.
Rename the table and/or the stored proc and you should be fine. It sounds like a limitation of either the JDBC driver or the database.
Your JDBC driver is out of date. Updating to a later version might help solve your problem.
First download a more recent jConnect driver from the Sybase website, then update your code to use the new driver package; the package name of the driver changes with each new version of the specification. (The current package is com.sybase.jdbcx...)
Take a look at the programmer's reference for more information.
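A minimal sketch of what switching drivers looks like; the exact driver class name depends on the jConnect version (for example com.sybase.jdbc3.jdbc.SybDriver for jConnect 6.x or com.sybase.jdbc4.jdbc.SybDriver for jConnect 7.x), and the host, port, and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class SybaseConnect {
    public static void main(String[] args) throws Exception {
        // Register the newer jConnect driver instead of com.sybase.jdbc2.jdbc.SybDriver.
        Class.forName("com.sybase.jdbc4.jdbc.SybDriver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:sybase:Tds:dbhost:5000/StudActive", "user", "password")) {
            // ... call the stored procedure as before ...
        }
    }
}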