I'm using the Java implementation of the Bitcoin RPC client.
When I call createRawTransaction with an int amount, the raw transaction is created as expected:
BitcoindRpcClient.TxOutput txOut1 = new BitcoindRpcClient.BasicTxOutput(issuerAddress,
new BigDecimal(1));
When I try to use a double value instead of an int:
BitcoindRpcClient.TxOutput txOut1 = new BitcoindRpcClient.BasicTxOutput(issuerAddress,
new BigDecimal(1.2));
I receive this error: invalid amount.
When I try the same call using bitcoin-cli, it works as expected.
NOTE: I'm working on a local testnet blockchain.
The output of:
System.out.println(new BigDecimal(1.2));
System.out.println(BigDecimal.valueOf(1.2));
Is:
1.1999999999999999555910790149937383830547332763671875
1.2
So the short answer is to use the preferred way to convert a double: BigDecimal.valueOf(1.2).
The long answer is that floating-point numbers are complicated: a double cannot store 1.2 exactly, only the closest binary approximation, and new BigDecimal(double) faithfully preserves that approximation, while BigDecimal.valueOf(double) goes through Double.toString and gives you the short decimal form you expect.
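A minimal, self-contained illustration of the difference between the two conversions (the String constructor is a third, fully exact option):

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the binary approximation of 1.2
        System.out.println(new BigDecimal(1.2));
        // BigDecimal.valueOf(double) goes through Double.toString, giving "1.2"
        System.out.println(BigDecimal.valueOf(1.2));
        // The String constructor is exact and avoids double entirely
        System.out.println(new BigDecimal("1.2"));
    }
}
```

For amounts that come from user input or config, the String constructor sidesteps the problem entirely, since no double is ever involved.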
When sending a Postman request to my API, I would like an error to be thrown when I enter a value that is outside the max value for a float, 3.4028235e+38f. (I am using Java & Spring Boot.)
I have tried to use a float, which was the requirement for the particular field, but I haven't been able to work it out. I have tried using a decimal instead but still haven't managed to get the error I expected displayed in Postman.
// @Max(34028235 + 38)
// MAX_VALUE = 0x1.fffffeP+127f; // 3.4028235e+38f
//Long maxFloatValue = longValue(3.4028235e+38f);
//@Range(max = longValue(3.4028235e+38f))
//@Max(maxFloatValue)
@JsonProperty("RequestedAm")
private Float requestedAm;
//@Max(340282300000000000000000000000000000000)
//@JsonProperty("RequestedAm")
//private BigDecimal requestedAm;
I would like an error to be displayed in Postman like it is for Strings when using @Min and @Max, but I'm not sure how this works for a float, or how to get as close as possible.
If you're using the javax.validation @Max annotation, the JavaDoc states that the float and double types are not supported (due to rounding errors). You could try using BigDecimal in your controller (DTO?), then converting as necessary after validation has run.
Alternatively, you could write a custom validator that does the work, but keep in mind the warning about potential rounding errors: if it were simple, the javax.validation annotations would already support it.
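As a sketch of what such a custom check could do (plain stdlib, no validation framework; FloatRangeCheck and fitsInFloat are made-up names for illustration), the incoming BigDecimal can be compared against Float.MAX_VALUE with no rounding at all:

```java
import java.math.BigDecimal;

public class FloatRangeCheck {
    // Float.MAX_VALUE (3.4028235e+38f) widens to double exactly,
    // so BigDecimal.valueOf captures it without any rounding error
    private static final BigDecimal MAX_FLOAT = BigDecimal.valueOf(Float.MAX_VALUE);

    static boolean fitsInFloat(BigDecimal value) {
        return value.abs().compareTo(MAX_FLOAT) <= 0;
    }

    public static void main(String[] args) {
        System.out.println(fitsInFloat(new BigDecimal("12345.67"))); // in range
        System.out.println(fitsInFloat(new BigDecimal("3.5e38")));   // out of range
    }
}
```

A ConstraintValidator wrapping this comparison would then give you the familiar Postman error response for out-of-range values.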
I am using the LibSVM record reader to load sparse data into neural networks.
This worked fine with an MLP model, but it failed when I tried to load data into one of the example CNNs:
ComputationGraphConfiguration config = new NeuralNetConfiguration.Builder()
.trainingWorkspaceMode(WorkspaceMode.SINGLE).inferenceWorkspaceMode(WorkspaceMode.SINGLE)
//.trainingWorkspaceMode(WorkspaceMode.SEPARATE).inferenceWorkspaceMode(WorkspaceMode.SEPARATE)
.weightInit(WeightInit.RELU)
.activation(Activation.LEAKYRELU)
.updater(Updater.ADAM)
.convolutionMode(ConvolutionMode.Same)
.regularization(true).l2(0.0001)
.learningRate(0.01)
.graphBuilder()
.addInputs("input")
.addLayer("cnn3", new ConvolutionLayer.Builder()
.kernelSize(3, vectorSize)
.stride(1, vectorSize)
.nIn(1)
.nOut(cnnLayerFeatureMaps)
.build(), "input")
.addLayer("cnn4", new ConvolutionLayer.Builder()
.kernelSize(4, vectorSize)
.stride(1, vectorSize)
.nIn(1)
.nOut(cnnLayerFeatureMaps)
.build(), "input")
.addLayer("cnn5", new ConvolutionLayer.Builder()
.kernelSize(5, vectorSize)
.stride(1, vectorSize)
.nIn(1)
.nOut(cnnLayerFeatureMaps)
.build(), "input")
.addVertex("merger", new MergeVertex(), "cnn3", "cnn4", "cnn5")
.addLayer("globalPool", new GlobalPoolingLayer.Builder()
.poolingType(globalPoolingType)
.dropOut(0.5)
.build(), "merger")
.addLayer("out", new OutputLayer.Builder()
.lossFunction(LossFunctions.LossFunction.MCXENT)
.activation(Activation.SOFTMAX)
.nIn(3*cnnLayerFeatureMaps)
.nOut(classes.length)
.build(), "globalPool")
.setOutputs("out")
.setInputTypes(InputType.convolutionalFlat(32,45623,1))
.build();
I got an error that seems to be saying that it was getting 2-dimensional data, but it needs 3-dimensional data (with the third dimension being a trivial one).
Exception in thread "main" java.lang.IllegalArgumentException: Invalid input: expect output columns must be equal to rows 32 x columns 45623 x channels 1 but was instead [32, 45623]
How do I give it the 1 channel dimension?
Failing that, how do I get the CNN to recognize channel-less data, or how do I give a CNN sparse data?
Thank you
The typical problem you run into when setting up CNNs is setting the input type wrong. Deeplearning4j's equivalent of an "input layer" is an input type, where common configurations (such as RNN or flattened CNN input) are set up depending on the kind of data you are dealing with. If you are using CNNs, you typically want to look at the InputType.convolutionalFlat method.
That will take a flat vector and convert it to a proper 1-channel tensor meant for use with CNNs. If you use an input type, it will also automatically set things like the number of inputs and outputs for you.
I have a number 10.625
In JavaScript
10.625.toFixed(2) //gives 10.63
In Java,
Double.valueOf(new DecimalFormat("###.##").format(Double.valueOf("10.625"))) //gives 10.62
How can I achieve identical output on both the client and the server side?
This will print 10.63, and the conversion back to Double will be based on this value.
DecimalFormat df = new DecimalFormat("###.##");
df.setRoundingMode(RoundingMode.HALF_UP);
System.out.println(df.format(Double.valueOf("10.625")));
The way to harmonise the view of the number on client and server is to do your processing on the server and have the client display what the server sends it.
Use the Java version on the server, convert to a String, and send that to the client. Then you'll just be doing the conversion once, and won't have the possibility of a conflict.
After all, the client is just supposed to be giving a nice view of what the server has done. Repeating the calculations on the client is not the right way to think about it.
You could use Math.round(value * 100) / 100.0 in both Java and JavaScript. (Note the 100.0 in the Java version: Math.round returns a long there, so dividing by the integer 100 would silently truncate the result to a whole number.)
BigDecimal bd = new BigDecimal("10.625");
bd = bd.setScale(2,BigDecimal.ROUND_HALF_UP);
double value = Double.parseDouble(new DecimalFormat(".##").format(bd));
System.out.println(value);
will also print 10.63
Some important details to note:
Since you don't care how many digits appear before the decimal point, you can simply use .##
With DecimalFormat, the .## pattern prints 0.x0 as 0.x, while the 0.00 pattern keeps the trailing zero and prints 0.x0 as 0.x0, so I prefer the second format.
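The original mismatch comes from DecimalFormat's default rounding mode, HALF_EVEN. A minimal sketch showing both modes on the same value, with the locale pinned to US so the decimal separator is always a dot:

```java
import java.math.RoundingMode;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class RoundingDemo {
    public static void main(String[] args) {
        DecimalFormat df = new DecimalFormat("###.##",
                DecimalFormatSymbols.getInstance(Locale.US));

        // DecimalFormat defaults to HALF_EVEN, which rounds 10.625 to its
        // even neighbour 10.62 (10.625 is exactly representable in binary)
        System.out.println(df.getRoundingMode() + " -> " + df.format(10.625));

        // HALF_UP matches what JavaScript's toFixed(2) produces here: 10.63
        df.setRoundingMode(RoundingMode.HALF_UP);
        System.out.println(df.getRoundingMode() + " -> " + df.format(10.625));
    }
}
```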
Long values in objects generated by Cloud Endpoints are annotated with @JsonString. This causes an IllegalArgumentException when deserializing those objects using a GsonFactory.
This is the stacktrace:
Caused by: java.lang.IllegalArgumentException: number type formatted as a JSON number cannot use @JsonString annotation [key updated, field private java.lang.Long com.google.api.services.timetable.model.Lesson.updated]
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:119)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:599)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:350)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:586)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:289)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:266)
at com.google.api.client.json.JsonFactory.fromString(JsonFactory.java:207)
Example code to produce the Exception:
GsonFactory gsonFactory = new GsonFactory();
Lesson lesson = new Lesson();
lesson.setUpdated(2L);
String json = gsonFactory.toString(lesson);
gsonFactory.fromString(json, Lesson.class);
Original discussion: https://groups.google.com/d/msg/endpoints-trusted-testers/-/_TKGoruZVt0J
The reason this exception occurs is that the Java client library expects all long integers to be quoted (i.e. serialized as strings), because JavaScript can't handle 64-bit integer precision correctly. There's a known issue where the Python SDK won't correctly serialize 64-bit integers as strings. I'm not sure where you're getting the JSON from, exactly, but if it's produced in user code, you need to make sure your 64-bit integers are quoted as well.
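The underlying precision problem is easy to demonstrate in plain Java: a double (the only number type JavaScript has) can represent every integer only up to 2^53, so round-tripping a larger long through a JSON number silently corrupts it, which is why the client library insists on string quoting:

```java
public class LongPrecisionDemo {
    public static void main(String[] args) {
        long safe = 9007199254740992L;  // 2^53: still exact as a double
        long unsafe = safe + 1;         // 2^53 + 1: not representable

        System.out.println((long) (double) safe == safe);     // survives
        System.out.println((long) (double) unsafe == unsafe); // corrupted
        // 2^53 + 1 rounds back down to 2^53 when stored as a double
        System.out.println((long) (double) unsafe);
    }
}
```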
Does anyone know the "proper" procedure to learn a Bayesian Network from data using the WEKA API? I can't find good instructions in the WEKA documentation.
Based on the documentation and what each function is "supposed" to do, I thought this would work:
Instances ins = DataSource.read( filename );
ins.setClassIndex(0);
K2 learner = new K2();
MultiNomialBMAEstimator estimator = new MultiNomialBMAEstimator();
estimator.setUseK2Prior(true);
EditableBayesNet bn = new EditableBayesNet( ins );
bn.initStructure();
learner.buildStructure(bn, ins);
estimator.estimateCPTs(bn);
But it doesn't. I've tried this and other variations and I keep getting ArrayIndexOutOfBoundsException or NullPointerException somewhere inside WEKA code, so what am I missing?
It works for me. I tried with the following set of data:
@relation test
@attribute x {0,1}
@attribute y {0,1,2}
@attribute z {0,1}
@data
0,1,0
1,0,1
1,1,1
1,2,1
0,0,0
Let me mention that exceptions are expected when your target attribute is not nominal (e.g. numeric). Bayesian networks work best when all attributes are nominal. If you change the target attribute to numeric you'll get a NullPointerException or an ArrayIndexOutOfBoundsException; in particular, the exception is thrown at this line:
EditableBayesNet bn = new EditableBayesNet(ins);
You should first discretize your target class.