Semester-wise results and CGPA count by range - java

How do I calculate grade-wise counts by semester and grade range?
I have tables, fields, and data as follows:
SEMESTER
SEMESTER_NO  NAME
1            FIRST
2            SECOND
STUDENT_SEMESTER
STUDENT_NO  SEMESTER_NO  SCORE
1           1            9.8
2           1            9.2
3           1            8.8
4           1            7.8
5           1            7.5
6           1            7.2
7           2            8.8
8           2            8.2
9           2            8.8
10          2            6.8
11          2            6.5
12          2            6.2
Expected results:
SEMESTER_NO  SCORE 10-9  SCORE 9-8  SCORE 8-7  SCORE 7-6  SCORE 6-5  SCORE 5-4
1            2           1          3          0          0          0
2            0           3          0          2          0          0

Using CASE-WHEN and PIVOT is one of the options:
SELECT *
FROM
(
    SELECT semester_no,
           CASE
               WHEN score >= 9 AND score <= 10 THEN 'Score 10-9'
               WHEN score >= 8 AND score < 9   THEN 'Score 9-8'
               WHEN score >= 7 AND score < 8   THEN 'Score 8-7'
               WHEN score >= 6 AND score < 7   THEN 'Score 7-6'
               WHEN score >= 5 AND score < 6   THEN 'Score 6-5'
               WHEN score >= 4 AND score < 5   THEN 'Score 5-4'
           END AS score_cat
    FROM student_semester
)
PIVOT
(
    COUNT(score_cat)
    FOR score_cat IN ('Score 10-9','Score 9-8','Score 8-7','Score 7-6','Score 6-5','Score 5-4')
)
ORDER BY semester_no
The CASE expression categorizes each score into a bucket (modify the boundaries as needed), and the PIVOT clause transposes those buckets into columns.
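Since the question is tagged java, here is a minimal sketch of consuming the pivoted rows over JDBC. The connection URL and credentials are placeholders, it assumes you alias each IN-list item (e.g. 'Score 10-9' AS score_10_9) so the pivot columns get plain names, and the CASE is shortened to two buckets for brevity:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ScorePivotDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        String url = "jdbc:oracle:thin:@//localhost:1521/XEPDB1";

        // The PIVOT query from above, shortened to two buckets here;
        // aliasing each IN item gives the pivot output columns plain names.
        String sql =
              "SELECT * FROM (SELECT semester_no, CASE "
            + "  WHEN score >= 9 THEN 'Score 10-9' "
            + "  WHEN score >= 8 THEN 'Score 9-8' "
            + "END AS score_cat FROM student_semester) "
            + "PIVOT (COUNT(score_cat) FOR score_cat IN "
            + "  ('Score 10-9' AS score_10_9, 'Score 9-8' AS score_9_8)) "
            + "ORDER BY semester_no";

        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("semester %d: %d in 10-9, %d in 9-8%n",
                        rs.getInt("SEMESTER_NO"),
                        rs.getInt("SCORE_10_9"),
                        rs.getInt("SCORE_9_8"));
            }
        }
    }
}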

Related

How to calculate statistics for orders grouped by year in MySQL?

Order table:
id  created_date  price
1   2021-09-01    100
2   2021-09-01    50
3   2020-09-01    50
4   2020-09-01    150
5   2019-09-01    100
Order details table:
id  order_id  product_quantity
1   1         10
2   2         20
3   1         30
4   3         10
5   3         20
6   4         30
7   5         20
I need a MySQL query that returns the order count, total price, and total product quantity for each year.
The expected table:
year(created_date)  orders_count  product_quantity_in_orders  price
2021                2             60                          150
2020                2             60                          150
2019                1             20                          100
Please try this. The derived table pre-aggregates product_quantity per order, so joining it back to orders does not double-count o.price across multiple order_details rows:
SELECT YEAR(o.created_date) year_created
     , COUNT(o.id) orders_count
     , SUM(t.product_quantity) product_quantity_in_orders
     , SUM(o.price) price
FROM orders o
INNER JOIN (SELECT order_id
                 , SUM(product_quantity) product_quantity
            FROM order_details
            GROUP BY order_id) t
        ON o.id = t.order_id
GROUP BY YEAR(o.created_date)
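If you need these statistics in Java, a hedged JDBC sketch (the JDBC URL, database name, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class YearlyOrderStats {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/shop"; // placeholder database
        // The query from the answer above, as a single string.
        String sql = "SELECT YEAR(o.created_date) year_created, COUNT(o.id) orders_count, "
                   + "SUM(t.product_quantity) product_quantity_in_orders, SUM(o.price) price "
                   + "FROM orders o "
                   + "INNER JOIN (SELECT order_id, SUM(product_quantity) product_quantity "
                   + "            FROM order_details GROUP BY order_id) t ON o.id = t.order_id "
                   + "GROUP BY YEAR(o.created_date)";

        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%d: %d orders, %d items, total %d%n",
                        rs.getInt("year_created"),
                        rs.getInt("orders_count"),
                        rs.getInt("product_quantity_in_orders"),
                        rs.getInt("price"));
            }
        }
    }
}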

Using H2OApi Java bindings to retrieve H2O Frame summary ends with MalformedJsonException

I am developing a Spring Boot application with H2O 3.20.0.10 and need a summary / metadata of all columns in a frame. When invoking frameSummary() of H2oApi, I get a MalformedJsonException:
Caused by: com.google.gson.stream.MalformedJsonException: JSON forbids NaN and infinities: NaN at line 1 column 23937 path $.frames[0].columns[6].mean
at com.google.gson.stream.JsonReader.nextDouble(JsonReader.java:912)
at com.google.gson.Gson$1.read(Gson.java:319)
at com.google.gson.Gson$1.read(Gson.java:313)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:131)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:222)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:41)
at com.google.gson.internal.bind.ArrayTypeAdapter.read(ArrayTypeAdapter.java:72)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:131)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:222)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:41)
at com.google.gson.internal.bind.ArrayTypeAdapter.read(ArrayTypeAdapter.java:72)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:131)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:222)
at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:37)
at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:25)
at retrofit2.ServiceMethod.toResponse(ServiceMethod.java:116)
at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:211)
at retrofit2.OkHttpCall.execute(OkHttpCall.java:174)
at water.bindings.H2oApi.frameSummary(H2oApi.java:3718)
The column at that position ($.frames[0].columns[6]) is of type enum and therefore has no mean value.
Update. MWE:
new H2oApi("http://localhost:54321").frameSummary(key);
The response from H2O looks something like this (collapsed arrays/objects shown as […] and {…}):
{
  "__meta": {
    "schema_version": 3,
    "schema_name": "FramesV3",
    "schema_type": "Frames"
  },
  "_exclude_fields": "",
  "row_offset": 0,
  "row_count": -1,
  "column_offset": 0,
  "full_column_count": -1,
  "column_count": -1,
  "job": null,
  "frames": [
    {
      "__meta": {
        "schema_version": 3,
        "schema_name": "FrameV3",
        "schema_type": "Frame"
      },
      "_exclude_fields": "",
      "frame_id": {
        "__meta": {
          "schema_version": 3,
          "schema_name": "FrameKeyV3",
          "schema_type": "Key<Frame>"
        },
        "name": "TrainR",
        "type": "Key<Frame>",
        "URL": "/3/Frames/TrainR"
      },
      "byte_size": 48018,
      "is_text": false,
      "row_offset": 0,
      "row_count": 100,
      "column_offset": 0,
      "column_count": 10,
      "full_column_count": 10,
      "total_column_count": 10,
      "checksum": 5296931134826174000,
      "rows": 1233,
      "num_columns": 10,
      "default_percentiles": […],
      "columns": [
        {…}, {…}, {…}, {…}, {…}, {…},
        {
          "__meta": {
            "schema_version": 3,
            "schema_name": "ColV3",
            "schema_type": "Vec"
          },
          "label": "weekday",
          "missing_count": 0,
          "zero_count": 176,
          "positive_infinity_count": 0,
          "negative_infinity_count": 0,
          "mins": […],
          "maxs": […],
          "mean": "NaN",
          "sigma": "NaN",
          "type": "enum",
          "domain": […],
          "domain_cardinality": 7,
          "data": […],
          "string_data": null,
          "precision": -1,
          "histogram_bins": […],
          "histogram_base": 0,
          "histogram_stride": 1,
          "percentiles": […]
        },
        {…}, {…}, {…}
      ],
      "compatible_models": null
    }
  ]
}
Is there a workaround for that?
I was able to fix the problem. By default, Gson rejects NaN and infinities when reading double values.
The fix was to set the lenient flag when configuring Gson.
I also created a pull request: https://github.com/h2oai/h2o-3/pull/2962
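For reference, the lenient configuration looks roughly like this (a sketch of the Gson/Retrofit wiring, not the exact patch; H2oApi builds its Retrofit stack internally, which is why the fix had to go upstream via the pull request):

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

// A lenient Gson accepts the bare NaN/Infinity tokens that the
// H2O /3/Frames response contains for enum columns.
Gson lenientGson = new GsonBuilder().setLenient().create();

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("http://localhost:54321/")  // H2O endpoint from the question
        .addConverterFactory(GsonConverterFactory.create(lenientGson))
        .build();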

Decrement the previous data count, if present sending data is same as previous

Suppose I have the following data in a table (Oracle SQL), for example:
STATE  RPD  MAC_ADDRESS        TOTAL_MAC_ONLINE_COUNT
26     AA   aa:bb:12:cc:ab:aa  1
26     BB   aa:bb:12:cc:ab:ab  1
26     CC   aa:bb:12:cc:ab:ac  1
26     DD   aa:bb:12:cc:ab:ad  1
26     EE   aa:bb:12:cc:ab:bb  1
Now I am sending data like this to the database:
26  AB  aa:bb:12:cc:ab:ac
Its MAC_ADDRESS is the same as the one for RPD "CC", so TOTAL_MAC_ONLINE_COUNT should decrease by one for RPD "CC":
26  CC  aa:bb:12:cc:ab:ac  0
and the new data should be inserted normally:
26  AB  aa:bb:12:cc:ab:ac  1
Thanks in advance.
Something like this?
Test case (no unique constraint on mac_address, since the model evidently allows duplicate MAC addresses; see below):

SQL> create table mac
       (state        number,
        rpd          varchar2(2),
        mac_address  varchar2(20),
        total_count  number);

Table created.

SQL> insert into mac
     select 26, 'aa', 'aa:bb:12:cc:ab:aa', 1 from dual union
     select 26, 'bb', 'aa:bb:12:cc:ab:ab', 1 from dual union
     select 26, 'cc', 'aa:bb:12:cc:ab:ac', 1 from dual;

3 rows created.
A procedure: I select the whole row into a local variable. If there is a match, I subtract 1 from the TOTAL_COUNT column of the row with the same MAC address and then insert the whole new row. If the SELECT raises NO_DATA_FOUND (a brand-new MAC address), the handler just inserts the new row directly, so new data is still added normally.
However, it seems that this model supports duplicate MAC addresses. Is that legal? What do you want to do in such cases, for example, if you enter the same MAC address once again? The SELECT will raise TOO_MANY_ROWS, which should be handled. One option is to update only one row (which one?); or ...?
SQL> create or replace procedure p_mac
       (par_state in number, par_rpd in varchar2, par_mac_address in varchar2)
     is
       l_row mac%rowtype;
     begin
       -- check whether a row with this MAC address already exists
       select *
         into l_row
         from mac
        where mac_address = par_mac_address;

       -- it does: decrement its count ...
       update mac set
         total_count = total_count - 1
       where mac_address = par_mac_address;

       -- ... and insert the whole new row
       insert into mac (state, rpd, mac_address, total_count)
       values (par_state, par_rpd, par_mac_address, 1);
     exception
       when no_data_found then
         -- brand-new MAC address: just insert the new row
         insert into mac (state, rpd, mac_address, total_count)
         values (par_state, par_rpd, par_mac_address, 1);
     end;
     /
Procedure created.
Testing:
SQL> select * from mac;
STATE RP MAC_ADDRESS TOTAL_COUNT
---------- -- -------------------- -----------
26 aa aa:bb:12:cc:ab:aa 1
26 bb aa:bb:12:cc:ab:ab 1
26 cc aa:bb:12:cc:ab:ac 1
SQL> exec p_mac(26, 'ab', 'aa:bb:12:cc:ab:ac');
PL/SQL procedure successfully completed.
SQL> select * from mac;
STATE RP MAC_ADDRESS TOTAL_COUNT
---------- -- -------------------- -----------
26 aa aa:bb:12:cc:ab:aa 1
26 bb aa:bb:12:cc:ab:ab 1
26 cc aa:bb:12:cc:ab:ac 0
26 ab aa:bb:12:cc:ab:ac 1
SQL>
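If you are calling the procedure from Java, a minimal JDBC sketch (connection URL and credentials are placeholders):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CallPmac {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "user", "pass");
             CallableStatement cs = con.prepareCall("{call p_mac(?, ?, ?)}")) {
            cs.setInt(1, 26);                        // par_state
            cs.setString(2, "ab");                   // par_rpd
            cs.setString(3, "aa:bb:12:cc:ab:ac");    // par_mac_address
            cs.execute();
        }
    }
}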

Annealing on a multi-layered neural network: XOR experiments

I am a beginner with this concept. Here is what I have tried with a feed-forward neural network (topology 2x2x1):
Bias and weight range  XOR test inputs  Output
[-1,1]                 1,1               0.9
                       1,0               0.8
                       0,1              -0.1
                       0,0               0.1
[-10,10]               1,1               0.24
                       1,0               0.67
                       0,1              -0.54
                       0,0               0.10
[-4,4]                 1,1              -0.02
                       1,0               0.80
                       0,1               0.87
                       0,0              -0.09
So the range [-4,4] seems to be better than the others.
Question: Is there a way to find proper limits for the weights and biases relative to the temperature limits and the temperature decrease rate?
Note: I am trying two approaches here. The first randomizes all weights and biases at once on each trial; the second randomizes only a single weight and a single bias per trial (50 iterations before decreasing the temperature). Changing a single weight gives worse results.
Here, (n+1) is the next value and (n) the previous one.
TempMax = 2.0
TempMin = 0.1 (as the temperature approaches zero, the XOR output error approaches zero too)
Temp(n+1) = Temp(n) / 1.001
Weight update:
w(n+1) = w(n) + (float)(Math.random() * t * 2.0f - t); // t is the temperature
(same for the bias update)
Iterations per temperature = 50
Using Java's Math.random() (is its spectral property appropriate for annealing?)
Transition probability:
1.0f / (1.0f + Math.exp((candidateStateError - oldError) / temp))
Neuron activation function: Math.tanh()
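Put together, the trial loop described by these parameters looks roughly like this (a sketch; Network and its error/snapshot/perturb/restore methods are hypothetical stand-ins for your own code):

import java.util.Random;

// Sketch of one annealing run with the parameters listed above.
static void anneal(Network network, double[][] trainingSet) {
    Random rnd = new Random();
    double temp = 2.0;                  // TempMax
    final double tempMin = 0.1;         // TempMin
    double oldError = network.error(trainingSet);

    while (temp > tempMin) {
        for (int i = 0; i < 50; i++) {                   // 50 trials per temperature
            double[] backup = network.snapshotWeights(); // hypothetical helper
            // perturb every weight and bias by a uniform step in [-temp, temp]
            network.perturbAll(temp, rnd);               // hypothetical helper
            double candError = network.error(trainingSet);
            // logistic transition probability from above
            double p = 1.0 / (1.0 + Math.exp((candError - oldError) / temp));
            if (rnd.nextDouble() < p) {
                oldError = candError;                    // accept the candidate
            } else {
                network.restoreWeights(backup);          // reject and roll back
            }
        }
        temp /= 1.001;                                   // cooling schedule
    }
}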
I have tried many times and the results are nearly the same. Is reannealing the only solution to escape deeper local minima?
I need a suitable weight/bias range/limit according to the total neuron count, the layer count, and the starting/ending temperature. A 3x6x5x6x1 network can count 3-bit input and give the output, and can approximate a step function, but I always need to play with the ranges.
For this training data set the output error is too big (193 data points, 2 inputs, 1 output; each sample is an input line followed by its target output line):
193 2 1
0.499995 0.653846
1
0.544418 0.481604
1
0.620200 0.320118
1
0.595191 0.404816
0
0.404809 0.595184
1
0.171310 0.636142
0
0.014323 0.403392
0
0.617884 0.476556
0
0.391548 0.478424
1
0.455912 0.721618
0
0.615385 0.500005
0
0.268835 0.268827
0
0.812761 0.187243
0
0.076923 0.499997
1
0.769231 0.500006
0
0.650862 0.864223
0
0.799812 0.299678
1
0.328106 0.614848
0
0.591985 0.722088
0
0.692308 0.500005
1
0.899757 0.334418
0
0.484058 0.419839
1
0.200188 0.700322
0
0.863769 0.256940
0
0.384615 0.499995
1
0.457562 0.508439
0
0.515942 0.580161
0
0.844219 0.431535
1
0.456027 0.529379
0
0.235571 0.104252
0
0.260149 0.400644
1
0.500003 0.423077
1
0.544088 0.278382
1
0.597716 0.540480
0
0.562549 0.651021
1
0.574101 0.127491
1
0.545953 0.731052
0
0.649585 0.350424
1
0.607934 0.427886
0
0.499995 0.807692
1
0.437451 0.348979
0
0.382116 0.523444
1
1 0.500000
1
0.731165 0.731173
1
0.500002 0.038462
0
0.683896 0.536585
1
0.910232 0.581604
0
0.499998 0.961538
1
0.903742 0.769772
1
0.543973 0.470621
1
0.593481 0.639914
1
0.240659 0.448408
1
0.425899 0.872509
0
0 0.500000
0
0.500006 0.269231
1
0.155781 0.568465
0
0.096258 0.230228
0
0.583945 0.556095
0
0.550746 0.575954
0
0.680302 0.935290
1
0.693329 0.461550
1
0.500005 0.192308
0
0.230769 0.499994
1
0.721691 0.831791
0
0.621423 0.793156
1
0.735853 0.342415
0
0.402284 0.459520
1
0.589105 0.052045
0
0.189081 0.371208
0
0.533114 0.579952
0
0.251594 0.871762
1
0.764429 0.895748
1
0.499994 0.730769
0
0.415362 0.704317
0
0.422537 0.615923
1
0.337064 0.743842
1
0.560960 0.806496
1
0.810919 0.628792
1
0.319698 0.064710
0
0.757622 0.393295
0
0.577463 0.384077
0
0.349138 0.135777
1
0.165214 0.433402
0
0.241631 0.758362
0
0.118012 0.341772
1
0.514072 0.429271
1
0.676772 0.676781
0
0.294328 0.807801
0
0.153846 0.499995
0
0.500005 0.346154
0
0.307692 0.499995
0
0.615487 0.452168
0
0.466886 0.420048
1
0.440905 0.797064
1
0.485928 0.570729
0
0.470919 0.646174
1
0.224179 0.315696
0
0.439040 0.193504
0
0.408015 0.277912
1
0.316104 0.463415
0
0.278309 0.168209
1
0.214440 0.214435
1
0.089768 0.418396
1
0.678953 0.767832
1
0.080336 0.583473
1
0.363783 0.296127
1
0.474240 0.562183
0
0.313445 0.577267
0
0.416055 0.443905
1
0.529081 0.353826
0
0.953056 0.687662
1
0.534725 0.448035
1
0.469053 0.344394
0
0.759341 0.551592
0
0.705672 0.192199
1
0.385925 0.775385
1
0.590978 0.957385
1
0.406519 0.360086
0
0.409022 0.042615
0
0.264147 0.657585
1
0.758369 0.241638
1
0.622380 0.622388
1
0.321047 0.232168
0
0.739851 0.599356
0
0.555199 0.366750
0
0.608452 0.521576
0
0.352098 0.401168
0
0.530947 0.655606
1
0.160045 0.160044
0
0.455582 0.518396
0
0.881988 0.658228
0
0.643511 0.153547
1
0.499997 0.576923
0
0.575968 0.881942
0
0.923077 0.500003
0
0.449254 0.424046
1
0.839782 0.727039
0
0.647902 0.598832
1
0.444801 0.633250
1
0.392066 0.572114
1
0.242378 0.606705
1
0.136231 0.743060
1
0.711862 0.641568
0
0.834786 0.566598
1
0.846154 0.500005
1
0.538462 0.500002
1
0.379800 0.679882
0
0.584638 0.295683
1
0.459204 0.540793
0
0.331216 0.430082
0
0.672945 0.082478
0
0.671894 0.385152
1
0.046944 0.312338
0
0.499995 0.884615
0
0.542438 0.491561
1
0.540796 0.459207
1
0.828690 0.363858
1
0.785560 0.785565
0
0.686555 0.422733
1
0.231226 0.553456
1
0.465275 0.551965
0
0.378577 0.206844
0
0.567988 0.567994
0
0.668784 0.569918
1
0.384513 0.547832
1
0.288138 0.358432
1
0.432012 0.432006
1
0.424032 0.118058
1
0.296023 0.703969
1
0.525760 0.437817
1
0.748406 0.128238
0
0.775821 0.684304
1
0.919664 0.416527
0
0.327055 0.917522
1
0.985677 0.596608
1
0.356489 0.846453
0
0.500005 0.115385
1
0.377620 0.377612
0
0.559095 0.202936
0
0.410895 0.947955
1
0.187239 0.812757
1
0.768774 0.446544
0
0.614075 0.224615
0
0.350415 0.649576
0
0.160218 0.272961
1
0.454047 0.268948
1
0.306671 0.538450
0
0.323228 0.323219
1
0.839955 0.839956
1
0.636217 0.703873
0
0.703977 0.296031
0
0.662936 0.256158
0
0.100243 0.665582
1
I highly doubt that any strict rules exist for your problem. First of all, the limits/bounds of the weights depend strongly on your input data representation, activation functions, neuron count, and output function. The best you can rely on here are rules of thumb.
First, let's consider the initial weight values in classical algorithms. A basic idea for the weight scale is to use the range [-1,1] for small layers, and for large ones to divide it by the square root of the number of units in the larger layer. More sophisticated methods are described by Bishop (1995). With such a rule of thumb we could deduce that a reasonable range (simply an order of magnitude bigger than the initial guess) would be something of the form [-10,10]/sqrt(neurons_count_in_the_lower_layer).
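As a quick illustration of that rule of thumb (a sketch; the base range of 10 is the heuristic just quoted, not a tuned constant):

// Rule-of-thumb weight/bias bound: scale a base range of [-10,10]
// by the square root of the number of units in the lower layer.
static double weightBound(int neuronsInLowerLayer) {
    return 10.0 / Math.sqrt(neuronsInLowerLayer);
}

// e.g. for the 2x2x1 XOR net, weightBound(2) is about 7.07, i.e. a range
// of roughly [-7,7]; the same ballpark as the experiments above, where
// [-4,4] worked better than [-1,1] or [-10,10].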
Unfortunately, to the best of my knowledge, the temperature choice is much more complex, as it is a data-dependent factor rather than just a topology-based one. Some papers suggest values for specific time-series prediction tasks, but nothing general. For simulated annealing "in general" (not just applied to NN training), many heuristic choices have been proposed, e.g.:
If we know the maximum distance (cost function difference) between one neighbour and another, then we can use this information to calculate a starting temperature. Another method, suggested in (Rayward-Smith, V.J., Osman, I.H., Reeves, C.R., Smith, G.D. 1996. Modern Heuristic Search Methods. John Wiley & Sons), is to start with a very high temperature and cool it rapidly until about 60% of worst solutions are being accepted. This forms the real starting temperature, and it can now be cooled more slowly. A similar idea, suggested in (Dowsland, K.A. 1995. Simulated Annealing. In Modern Heuristic Techniques for Combinatorial Problems (ed. Reeves, C.R.), McGraw-Hill), is to rapidly heat the system until a certain proportion of worse solutions are accepted, and then slow cooling can start. This can be seen to be similar to how physical annealing works, in that the material is heated until it is liquid and then cooling begins (i.e. once the material is a liquid it is pointless to carry on heating it). [from University of Nottingham lecture notes]
But the choice of the best one for your application has to be based on numerous tests, as with most things in machine learning. If you are dealing with a problem where you really care about a well-trained neural network, it seems reasonable to look into Extreme Learning Machines (ELM), where the neural network training is conducted through a global optimization procedure, which guarantees the best possible solution (under the regularized cost function used). Simulated annealing, as an iterative, greedy process (as is back-propagation), cannot guarantee anything; there are only heuristics and rules of thumb.

"Error: 16" in Eclipse Run

I am writing a simple program designed to correctly bin elements from a large text file and re-display them in an accessible way. When I run it in Eclipse, it appears to run fine (running a small test file reveals that no data is missing from the final bins), but the Eclipse console displays "Error: 16". Can someone tell me what this is? Thanks.
Edit:
Here is my complete console output. It is entirely test output printed during the run, with the error at the end:
0 4 7 1 0 2 3 4 6 7 6 7 6 7 6 1 1 1 1 3 3 3 2 2 2 2 2 1 1 1 0 0 0 8 8 9 9 9 9 9 9 9 9 9 9 95.80,
1.345345,2.3880140,2.43,
6.4.576,
74.5,34.58431,43.50,
67,23.080,
423,
456,
456.2,
6.5,2.8,
3153,29.6,
97.685,
56.821233,
745623482281234,7.64,
6788464,
86,
1.234234,
78.84321,
4.5688,
5672,
45.62,
41.5321,
3.45,56745848867,
4.5234523,
742,
528488,
34.58652,
2340852,243.52345,22.46246,4.35,
345.2347674567,
23452.14352435,
2.68488,2457537453,
2435.23452345,
4358,2.021564,
Error: 16
