Riak high disk space usage - Java

I am evaluating Riak KV 2.1.1 on a local desktop using the Java client and a slightly customised version of the sample code.
My concern is that I found it to be taking almost 920 bytes per KV pair.
That seems too steep. The data dir was 93 MB for 100k KVs and kept increasing linearly thereafter for every 100k store ops.
Is that expected?
RiakCluster cluster = setUpCluster();
RiakClient client = new RiakClient(cluster);
System.out.println("Client object successfully created");

Namespace quotesBucket = new Namespace("quotes2");
long start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
    RiakObject quoteObject = new RiakObject()
            .setContentType("text/plain")
            .setValue(BinaryValue.create("You're dangerous, Maverick"));
    Location quoteObjectLocation = new Location(quotesBucket, ("Ice" + i));
    StoreValue storeOp = new StoreValue.Builder(quoteObject)
            .withLocation(quoteObjectLocation)
            .build();
    StoreValue.Response storeOpResp = client.execute(storeOp);
}

There was a thread on the riak-users mailing list a while back that discussed the overhead of a Riak object, estimating it at ~400 bytes per object. However, that was before the new object format was introduced, so it is outdated. Here is a fresh look.
First we need a local client:
(node1@127.0.0.1)1> {ok,C}=riak:local_client().
{ok,{riak_client,['node1@127.0.0.1',undefined]}}
Create a new Riak object with a 0-byte value:
(node1@127.0.0.1)2> Obj = riak_object:new(<<"size">>,<<"key">>,<<>>).
#r_object{bucket = <<"size">>,key = <<"key">>,
contents = [#r_content{metadata = {dict,0,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
{{[],[],[],[],[],[],[],[],[],[],[],[],...}}},
value = <<>>}],
vclock = [],
updatemetadata = {dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],...}}},
updatevalue = undefined}
The object is actually stored in a reduced binary format:
(node1@127.0.0.1)3> byte_size(riak_object:to_binary(v1,Obj)).
36
That is 36 bytes of overhead for just the object, but that doesn't include metadata like the last-updated time or the version vector, so store it in Riak and check again:
(node1@127.0.0.1)4> C:put(Obj).
ok
(node1@127.0.0.1)5> {ok,Obj1} = C:get(<<"size">>,<<"key">>).
{ok, #r_object{bucket = <<"size">>,key = <<"key">>,
contents = [#r_content{metadata = {dict,3,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
{{[],[],[],[],[],[],[],[],[],[],[[...]],[...],...}}},
value = <<>>}],
vclock = [{<<204,153,66,25,119,94,124,200,0,0,156,65>>,
{3,63654324108}}],
updatemetadata = {dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],...}}},
updatevalue = undefined}}
(node1@127.0.0.1)6> byte_size(riak_object:to_binary(v1,Obj1)).
110
Now it is 110 bytes overhead for an empty object with a single entry in the version vector. If a subsequent put of the object is coordinated by a different vnode, it will add another entry. I've selected the bucket and key names so that the local node is not a member of the preflist, so the second put has a fair probability of being coordinated by a different node.
(node1@127.0.0.1)7> C:put(Obj1).
ok
(node1@127.0.0.1)8> {ok,Obj2} = C:get(<<"size">>,<<"key">>).
{ok, #r_object{bucket = <<"size">>,key = <<"key">>,
contents = [#r_content{metadata = {dict,3,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
{{[],[],[],[],[],[],[],[],[],[],[[...]],[...],...}}},
value = <<>>}],
vclock = [{<<204,153,66,25,119,94,124,200,0,0,156,65>>,
{3,63654324108}},
{<<85,123,36,24,254,22,162,159,0,0,78,33>>,{1,63654324651}}],
updatemetadata = {dict,1,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],...}}},
updatevalue = undefined}}
(node1@127.0.0.1)9> byte_size(riak_object:to_binary(v1,Obj2)).
141
That is another 31 bytes for the additional entry in the version vector.
These numbers don't include storing the actual bucket and key names with the value, or Bitcask storing them again in a hint file, so the actual space on disk would be roughly: 2 × (bucket name size + key name size) + value size + object overhead + file structure overhead + checksum/hash size.
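To put those numbers in context for the question's workload, remember that Riak stores n_val copies of every object (3 by default), and on a single-machine setup all of those copies land on the same disk. Here is a rough back-of-the-envelope sketch of the formula above; the Bitcask per-entry overhead is an assumption for illustration, not a measured figure:
public class RiakSizeEstimate {
    public static void main(String[] args) {
        int bucketName = "quotes2".length();                   // 7 bytes
        int keyName = "Ice99999".length();                     // ~8 bytes
        int value = "You're dangerous, Maverick".length();     // 26 bytes
        int objectOverhead = 110;  // empty object + one vclock entry, measured above
        int bitcaskOverhead = 40;  // per-entry header + hint file entry (assumed)
        int nVal = 3;              // Riak's default replication factor

        int perReplica = 2 * (bucketName + keyName) + value + objectOverhead + bitcaskOverhead;
        System.out.println("Per replica:      ~" + perReplica + " bytes");
        System.out.println("Per KV (n_val=3): ~" + nVal * perReplica + " bytes");
    }
}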
If you're using bitcask, there is a calculator in the documentation that will help you estimate disk and memory requirements: http://docs.basho.com/riak/kv/2.2.0/setup/planning/bitcask-capacity-calc/
If you use eLevelDB, you have the option of snappy compression which could reduce the size on disk.


How can I read user data (memory) from EPC RFID tag through LLRP?

I encode two EPC tags through "NiceLabel Pro" with data:
First tag: EPC: 555555555, UserData: 9876543210123456789
Second tag: EPC: 444444444, UserData: 123456789123456789
Now I'm trying to get that data through LLRP (in my Java application):
My LLRPClient (one function):
public void PrepareInventoryRequest() {
    AccessCommand accessCommand = new AccessCommand();

    // A list to hold the op specs for this access command.
    accessCommand.setAccessCommandOpSpecList(GenerateOpSpecList());

    // Create a new tag spec.
    C1G2TagSpec tagSpec = new C1G2TagSpec();
    C1G2TargetTag targetTag = new C1G2TargetTag();
    targetTag.setMatch(new Bit(1));

    // We want to check memory bank 1 (the EPC memory bank).
    TwoBitField memBank = new TwoBitField("2");
    targetTag.setMB(memBank);

    // The EPC data starts at offset 0x20.
    // Start reading or writing from there.
    targetTag.setPointer(new UnsignedShort(0));

    // This is the mask we'll use to compare the EPC.
    // We want to match all bits of the EPC, so all mask bits are set.
    BitArray_HEX tagMask = new BitArray_HEX("00");
    targetTag.setTagMask(tagMask);

    // We only want to operate on tags with this EPC.
    BitArray_HEX tagData = new BitArray_HEX("00");
    targetTag.setTagData(tagData);

    // Add a list of target tags to the tag spec.
    List<C1G2TargetTag> targetTagList = new ArrayList<>();
    targetTagList.add(targetTag);
    tagSpec.setC1G2TargetTagList(targetTagList);

    // Add the tag spec to the access command.
    accessCommand.setAirProtocolTagSpec(tagSpec);
    accessSpec.setAccessCommand(accessCommand);
    ...

private List<AccessCommandOpSpec> GenerateOpSpecList() {
    // A list to hold the op specs for this access command.
    List<AccessCommandOpSpec> opSpecList = new ArrayList<>();

    // Set the default opspec for the event cycle of accessspec 3.
    C1G2Read opSpec1 = new C1G2Read();

    // Set the OpSpecID to a unique number.
    opSpec1.setOpSpecID(new UnsignedShort(1));
    opSpec1.setAccessPassword(new UnsignedInteger(0));

    // We'll read from user memory (bank 3).
    TwoBitField opMemBank = new TwoBitField("3");
    opSpec1.setMB(opMemBank);

    // We'll read from the base of this memory bank (0x00).
    opSpec1.setWordPointer(new UnsignedShort(0));

    // Read two words.
    opSpec1.setWordCount(new UnsignedShort(0));

    opSpecList.add(opSpec1);
    return opSpecList;
}
My tag handler function:
private void updateTable(TagReportData tag) {
    if (tag != null) {
        EPCParameter epcParam = tag.getEPCParameter();
        String EPCStr;
        List<AccessCommandOpSpecResult> accessResultList = tag.getAccessCommandOpSpecResultList();
        for (AccessCommandOpSpecResult accessResult : accessResultList) {
            if (accessResult instanceof C1G2ReadOpSpecResult) {
                C1G2ReadOpSpecResult op = (C1G2ReadOpSpecResult) accessResult;
                if ((op.getResult().intValue() == C1G2ReadResultType.Success) &&
                        (op.getOpSpecID().intValue() < 1000)) {
                    UnsignedShortArray_HEX userMemoryHex = op.getReadData();
                    System.out.println("User Memory read from the tag is = " + userMemoryHex.toString());
                }
            }
        }
        ...
For the first tag, "userMemoryHex.toString()" = "3938 3736"
For the second tag, "userMemoryHex.toString()" = "3132 3334"
Why? How do I get all user data?
This is my RFID tag.
The values that you get seem to be the first 4 characters of the number (interpreted as an ASCII string):
39383736 = "9876" (when interpreting those 4 bytes as ASCII characters)
31323334 = "1234" (when interpreting those 4 bytes as ASCII characters)
Since the specification of your tag says
Memory: EPC 128 bits, User 32 bits
your tag can only contain 32 bits (= 4 bytes) of user data. Hence, your tag simply can't contain the full value (i.e. 9876543210123456789 or 123456789123456789) that you tried to write as UserData (regardless of whether this was interpreted as a decimal number or a string).
Instead, your writer application seems to have taken the first 4 characters of those values, encoded them in ASCII, and wrote them to the tag.
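If you want to verify that interpretation in code, here is a small sketch that decodes the 16-bit words returned by the reader into ASCII; the word values are the ones from the question:
public class UserMemoryDecode {
    public static void main(String[] args) {
        int[] words = {0x3938, 0x3736}; // "3938 3736" as reported by userMemoryHex
        StringBuilder sb = new StringBuilder();
        for (int w : words) {
            sb.append((char) (w >> 8));   // high byte: 0x39 -> '9'
            sb.append((char) (w & 0xFF)); // low byte:  0x38 -> '8'
        }
        System.out.println(sb); // prints "9876"
    }
}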

Change disk size while cloning vm from template in vmware in java

I am very new to VMware. I have a requirement to change the hard disk size while creating a VM from a template; basically it is cloning. But when I try to execute it, I get the error "a specified parameter was not correct: device.key".
Can you please help me here?
Here is my code:
VirtualMachineRelocateSpec relocateSpec = new VirtualMachineRelocateSpec();
VirtualMachineCloneSpec cloneSpec = new VirtualMachineCloneSpec();
VirtualDeviceConfigSpec diskSpec = new VirtualDeviceConfigSpec();
diskSpec.setOperation(VirtualDeviceConfigSpecOperation.edit);
VirtualDisk vd = new VirtualDisk();
long diskSizeKB = 1000000;
int cKey = 1000;
vd.setCapacityInKB(diskSizeKB);
diskSpec.setDevice(vd);
vd.setControllerKey(cKey);
vd.setKey(1);
vd.setUnitNumber(2);
VirtualDiskFlatVer2BackingInfo diskfileBacking = new VirtualDiskFlatVer2BackingInfo();
String fileName = "[TestDataStore]";
diskfileBacking.setFileName(fileName);
diskfileBacking.setDiskMode("persistent");
diskfileBacking.setThinProvisioned(true);
vd.setBacking(diskfileBacking);
relocateSpec.setDatastore(vmInstace.getDatastores()[0].getMOR());
relocateSpec.setHost(hostSystem.getMOR());
relocateSpec.setPool(resourcePool.getMOR());
cloneSpec.setPowerOn(false);
cloneSpec.setLocation(relocateSpec);
VirtualMachineConfigSpec vmSpec = new VirtualMachineConfigSpec();
vmSpec.setMemoryMB(4000L);
vmSpec.setNumCPUs(3);
vmSpec.setDeviceChange(new VirtualDeviceConfigSpec[] {diskSpec});
cloneSpec.setConfig(vmSpec);
Task task = vmInstace.cloneVM_Task((Folder) vmInstace.getParent(),"TestVM", cloneSpec);
Each device (disk, controller, etc.) of a VM has its own unique key. The way VM configuration changes work is that you provide the key of the device you want to change, along with the new configuration.
In your code, you call vd.setKey(1), and VMware is telling you that you gave an invalid key.
Where did you get the value 1? If I had to guess, it was chosen arbitrarily. You will need to look at the configuration of the template and extract the disk device key from there. Then use this key in the call to vd.setKey.
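As a minimal sketch of that lookup, assuming the vijava API that the question's code appears to use (the template variable is illustrative), walk the template's device list, find its VirtualDisk, and copy the real keys into the new spec:
// Find the template's existing disk and reuse its identifying keys.
VirtualDevice[] devices = template.getConfig().getHardware().getDevice();
for (VirtualDevice device : devices) {
    if (device instanceof VirtualDisk) {
        VirtualDisk templateDisk = (VirtualDisk) device;
        vd.setKey(templateDisk.getKey());                     // the real device key
        vd.setControllerKey(templateDisk.getControllerKey()); // and its controller
        vd.setUnitNumber(templateDisk.getUnitNumber());
        vd.setCapacityInKB(diskSizeKB);                       // the new, larger size
        break;
    }
}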

Java OutOfMemoryError in apache Jena using TDB

Hi, I've been using Jena for a project and now I am trying to query a graph for storage in plain files, for batch processing with Hadoop.
I open a TDB dataset and then I query by pages with LIMIT and OFFSET.
I output files with 100,000 triples per file.
However, at the 10th file the performance degrades, at the 15th file it goes down by a factor of 3, and at the 22nd file the performance is down to 1%.
My query is:
SELECT DISTINCT ?S ?P ?O WHERE {?S ?P ?O .} LIMIT 100000 OFFSET X
The method that queries and writes to a file is shown in the next code block:
public boolean copyGraphPage(int size, int page, String tdbPath, String query,
        String outputDir, String fileName) throws IllegalArgumentException {
    boolean retVal = true;
    if (size == 0) {
        throw new IllegalArgumentException("The size of the page should be bigger than 0");
    }
    long offset = ((long) size) * page;
    Dataset ds = TDBFactory.createDataset(tdbPath);
    ds.begin(ReadWrite.READ);
    String queryString = (new StringBuilder()).append(query)
            .append(" LIMIT " + size + " OFFSET " + offset).toString();
    QueryExecution qExec = QueryExecutionFactory.create(queryString, ds);
    ResultSet resultSet = qExec.execSelect();
    List<String> resultVars;
    if (resultSet.hasNext()) {
        resultVars = resultSet.getResultVars();
        String fullyQualifiedPath = joinPath(outputDir, fileName, "txt");
        try (BufferedWriter bwr = new BufferedWriter(new OutputStreamWriter(
                new BufferedOutputStream(new FileOutputStream(fullyQualifiedPath)), "UTF-8"))) {
            while (resultSet.hasNext()) {
                QuerySolution next = resultSet.next();
                StringBuffer sb = new StringBuffer();
                sb.append(next.get(resultVars.get(0)).toString()).append(" ")
                  .append(next.get(resultVars.get(1)).toString()).append(" ")
                  .append(next.get(resultVars.get(2)).toString());
                bwr.write(sb.toString());
                bwr.newLine();
            }
            qExec.close();
            ds.end();
            ds.close();
            bwr.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
        resultVars = null;
        qExec = null;
        resultSet = null;
        ds = null;
    } else {
        retVal = false;
    }
    return retVal;
}
The variables set to null are there because I didn't know if there was a possible leak there.
However, after the 22nd file the program fails with the following message:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.jena.ext.com.google.common.cache.LocalCache$EntryFactory$2.newEntry(LocalCache.java:455)
at org.apache.jena.ext.com.google.common.cache.LocalCache$Segment.newEntry(LocalCache.java:2144)
at org.apache.jena.ext.com.google.common.cache.LocalCache$Segment.put(LocalCache.java:3010)
at org.apache.jena.ext.com.google.common.cache.LocalCache.put(LocalCache.java:4365)
at org.apache.jena.ext.com.google.common.cache.LocalCache$LocalManualCache.put(LocalCache.java:5077)
at org.apache.jena.atlas.lib.cache.CacheGuava.put(CacheGuava.java:76)
at org.apache.jena.tdb.store.nodetable.NodeTableCache.cacheUpdate(NodeTableCache.java:205)
at org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:129)
at org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50)
at org.apache.jena.tdb.solver.BindingTDB.get1(BindingTDB.java:122)
at org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
at org.apache.jena.sparql.engine.binding.BindingProjectBase.get1(BindingProjectBase.java:52)
at org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
at org.apache.jena.sparql.engine.binding.BindingProjectBase.get1(BindingProjectBase.java:52)
at org.apache.jena.sparql.engine.binding.BindingBase.get(BindingBase.java:121)
at org.apache.jena.sparql.engine.binding.BindingBase.hashCode(BindingBase.java:201)
at org.apache.jena.sparql.engine.binding.BindingBase.hashCode(BindingBase.java:183)
at java.util.HashMap.hash(HashMap.java:338)
at java.util.HashMap.containsKey(HashMap.java:595)
at java.util.HashSet.contains(HashSet.java:203)
at org.apache.jena.sparql.engine.iterator.QueryIterDistinct.getInputNextUnseen(QueryIterDistinct.java:106)
at org.apache.jena.sparql.engine.iterator.QueryIterDistinct.hasNextBinding(QueryIterDistinct.java:70)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIterSlice.hasNextBinding(QueryIterSlice.java:76)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:39)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:39)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
Disconnected from the target VM, address: '127.0.0.1:57723', transport: 'socket'
Process finished with exit code 255
The memory viewer shows an increase in memory usage after each page is queried.
It is clear that Jena LocalCache is filling up. I have changed the Xmx to 2048m and Xms to 512m with the same result; nothing changed.
Do I need more memory?
Do I need to clear something?
Do I need to stop the program and do it in parts?
Is my query wrong?
Does the OFFSET have anything to do with it?
I read in some old mail postings that you can turn the cache off, but I could not find any way to do it. Is there a way to turn the cache off?
I know it is a very difficult question but I appreciate any help.
It is clear that Jena LocalCache is filling up
This is the TDB node cache - it usually needs 1.5G (2G is better) per dataset itself. This cache persists for the lifetime of the JVM.
A Java heap of 2G is a small Java heap by today's standards. If you must use a small heap, you can try running in 32-bit mode (called "direct mode" in TDB), but this is less performant (mainly because the node cache is smaller, and in this dataset you do have enough nodes to cause cache churn with a small cache).
The node cache is the main cause of the heap exhaustion but the query is consuming memory elsewhere, per query, in DISTINCT.
DISTINCT is not necessarily cheap. It needs to remember everything it has seen to know whether a new row is the first occurrence or already seen.
Apache Jena does optimize some cases of DISTINCT combined with LIMIT (a TopN query), but the cutoff for that optimization is 1000 by default. See OpTopN in the code.
Otherwise it is collecting all the rows seen so far. The further through the dataset you go, the more there is in the node cache and also the more there is in the DISTINCT filter.
Do I need more memory?
Yes, more heap. The sensible minimum is 2G per TDB dataset, plus whatever Java itself requires (say, 0.5G), plus your program and query workspace.
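Since each OFFSET page re-evaluates and re-filters everything before the requested page, another option is a single streaming pass that writes all the files in one query. A plain ?S ?P ?O scan over the default graph yields each triple once, so DISTINCT can be dropped. This is only a sketch under those assumptions; newPageWriter is a hypothetical helper that opens the next output file:
Dataset ds = TDBFactory.createDataset(tdbPath);
ds.begin(ReadWrite.READ);
try (QueryExecution qExec = QueryExecutionFactory.create(
        "SELECT ?S ?P ?O WHERE { ?S ?P ?O }", ds)) {
    ResultSet rs = qExec.execSelect();
    int rows = 0;
    int fileNo = 0;
    BufferedWriter out = newPageWriter(outputDir, fileNo); // hypothetical helper
    while (rs.hasNext()) {
        QuerySolution qs = rs.next();
        out.write(qs.get("S") + " " + qs.get("P") + " " + qs.get("O"));
        out.newLine();
        if (++rows % 100000 == 0) { // roll over to the next 100k-triple file
            out.close();
            out = newPageWriter(outputDir, ++fileNo);
        }
    }
    out.close();
} finally {
    ds.end();
    ds.close();
}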
You seem to have a memory leak somewhere. This is just a guess, but try this:
TDBFactory.release(ds);
REF: https://jena.apache.org/documentation/javadoc/tdb/org/apache/jena/tdb/TDBFactory.html#release-org.apache.jena.query.Dataset-
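In the question's copyGraphPage method, that call would go with the rest of the cleanup, after the dataset is closed, so TDB's cached state for that location is dropped between page calls (a sketch, assuming release behaves as the javadoc describes):
qExec.close();
ds.end();
ds.close();
TDBFactory.release(ds); // drop TDB's cached state for this dataset location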

How to edit an entry-sequenced Enscribe file

I need some help with this problem. It looks stupid, but I could not resolve it. I have an entry-sequenced file with variable-length records. I only need to replace the first 3 bytes with "XXX", so I have to rebuild the whole file. The problem I am getting is that I am changing the length of all the records, padding them with nulls, because I have no way to know in advance the number of bytes actually written for each record.
For example, I have this file with four records:
AAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCC
DDDDDDDDDDDDDD
The file has a REC attribute of 26 (equal to the length of the second record). When I execute my program to change the first three letters, the file ends up like this (read "N" as a null character):
AAAAAAAAAAAAAAAANNNNNNNNNN
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCNNNNNNNNNNNNNNNNNNNNN
DDDDDDDDDDDDDDNNNNNNNNNNNN
How can I change my program to get what I want?
XXXAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCC
DDDDDDDDDDDDDD
This is my code (Java):
EnscribeFile p_origin = new EnscribeFile(file);
String first_record;
byte buffer[];

// First, load all records and then purge the file content
ArrayList<byte[]> records = new ArrayList<byte[]>();
p_origin.open(EnscribeOpenOptions.READ_WRITE, EnscribeOpenOptions.SHARED);
EnscribeFileAttributes et = p_origin.getFileInfo();
buffer = new byte[et.getRecordLength()];
while (p_origin.read(buffer, et.getRecordLength()) != EnscribeFile.POSITION_UNUSED)
{
    byte auxRecord[] = new byte[et.getRecordLength()];
    System.arraycopy(buffer, 0, auxRecord, 0, et.getRecordLength());
    buffer = new byte[et.getRecordLength()];
    records.add(auxRecord);
}
p_origin.purgeData();

// Second, modify the first record
first_record = new String(records.get(0));
first_record = "XXX" + first_record.substring(3);
records.set(0, first_record.getBytes());

// Third, rewrite the records and close the file
Iterator<byte[]> i = records.iterator();
while (i.hasNext())
{
    byte aux[] = i.next();
    p_origin.write(aux, et.getRecordLength()); // check the note below
}
p_origin.close();
Note: I cannot simply trim each record at the last character before the first null when writing, because a null or nulls at the end of a record are possible and acceptable. Example (remember "N" is a null):
AAAAAAAAAAAAAAAANN
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCNN
DDDDDDDDDDDDDDNN
It must be equal to this after the process:
XXXAAAAAAAAAAAAANN
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCNN
DDDDDDDDDDDDDDNN
OK, I found the solution on another forum. It is very simple: the method
p_origin.read(...)
returns the number of bytes read, which is exactly the length I did not know, so I just save that length in a variable before creating each new record. With some changes the code becomes:
EnscribeFile p_origin = new EnscribeFile(file);
String first_record;
byte buffer[];

// First, load all records and then purge the file content
ArrayList<byte[]> records = new ArrayList<byte[]>();
p_origin.open(EnscribeOpenOptions.READ_WRITE, EnscribeOpenOptions.SHARED);
EnscribeFileAttributes et = p_origin.getFileInfo();
buffer = new byte[et.getRecordLength()];
int aux_len = p_origin.read(buffer, et.getRecordLength());
while (aux_len != EnscribeFile.POSITION_UNUSED)
{
    // Copy only the bytes that were actually read, not the full record length
    byte auxRecord[] = new byte[aux_len];
    System.arraycopy(buffer, 0, auxRecord, 0, aux_len);
    records.add(auxRecord);
    aux_len = p_origin.read(buffer, et.getRecordLength());
}
p_origin.purgeData();

// Second, modify the first record
first_record = new String(records.get(0));
first_record = "XXX" + first_record.substring(3);
records.set(0, first_record.getBytes());

// Third, rewrite the records (each with its own length) and close the file
Iterator<byte[]> i = records.iterator();
while (i.hasNext())
{
    byte aux_byte[] = i.next();
    p_origin.write(aux_byte, aux_byte.length);
}
p_origin.close();

How do I connect to device using jamod and interpret the data

My client wants to control the HVAC systems installed at their site with a custom solution. The HVAC devices provide Modbus TCP/IP connectivity. I'm new to this field and have no knowledge of Modbus. I searched the internet and found jamod, a Java library for Modbus. Now I would like to write a program using jamod, but my confusion is how to get the address of the device I want to connect to. My second problem is: even if I manage to connect to the device, how can I get the required data (in engineering units like temperature) from Modbus? My questions may sound awful, but please forgive me as I'm a novice in this field.
How do I get the address of the device I want to connect to?
This depends on whether you're connecting over Modbus RTU or Modbus TCP. RTU (serial) will have a slave ID you'll specify, while TCP is more direct and the slave ID should always be 1.
How can I get required data (in engineering units like temperature) from MODBUS?
Hopefully the data is already formatted in engineering units. Check the device's manual; there should be a table or chart mapping registers to values.
Example:
String portname = "COM1"; // the name of the serial port to be used
int unitid = 1;           // the unit identifier we will be talking to, see the first question
int ref = 0;              // the reference; where to start reading from
int count = 0;            // the count of input registers to read
int repeat = 1;           // a loop for repeating the transaction

// set up the Modbus master
ModbusCoupler.createModbusCoupler(null);
ModbusCoupler.getReference().setUnitID(1); // <-- this is the master id and it doesn't really matter

// set up the serial parameters
SerialParameters params = new SerialParameters();
params.setPortName(portname);
params.setBaudRate(9600);
params.setDatabits(8);
params.setParity("None");
params.setStopbits(1);
params.setEncoding("ascii");
params.setEcho(false);

// open the connection
SerialConnection con = new SerialConnection(params);
con.open();

// prepare a request
ReadInputRegistersRequest req = new ReadInputRegistersRequest(ref, count);
req.setUnitID(unitid); // <-- remember, this is the slave id from the first question
req.setHeadless();

// prepare a transaction
ModbusSerialTransaction trans = new ModbusSerialTransaction(con);
trans.setRequest(req);

// execute the transaction `repeat` times because serial connections aren't exactly trustworthy...
int k = 0;
do {
    trans.execute();
    ReadInputRegistersResponse res = (ReadInputRegistersResponse) trans.getResponse();
    for (int n = 0; n < res.getWordCount(); n++) {
        System.out.println("Word " + n + " = " + res.getRegisterValue(n));
    }
    k++;
} while (k < repeat);

// close the connection
con.close();
First, "address" is ambiguous when you're working with Modbus/TCP since there is the IP address of the slave, the unit number of the thing you're talking to (typically 0 for Modbus/TCP), and the address of any registers.
For the "engineering units" question, what you're going to want is the Modbus register map, with any units or conversion factors included. You may also need to know data types, since all Modbus registers are 16 bits.
