Turn Chunk of Files into Pages in Java

I have code where I read a large file in chunks and transform it into JSON to send to a Spring MVC controller. My method looks something like this: I pass the position where I want to start in offset and the chunk size in noBytes, and the result is transformed into JSON and sent to my controller.
public String readByteBlock(File file, int offset, int noBytes) throws IOException {
    try (InputStream in = new FileInputStream(file)) {
        byte[] result = new byte[noBytes];
        in.skip(offset);
        // read() may return fewer bytes than requested (or -1 at EOF)
        int bytesRead = in.read(result, 0, noBytes);
        return FileSaver.byteArrayToString(result, bytesRead);
    }
}
I am having problems making the next and previous pagination work in my controller:
@RequestMapping("/page/{id}")
@ResponseBody
public String page(@PathVariable String id, HttpServletRequest req) throws IOException {
    File file = new File("C:/teste.txt");
    FileSaver fs = new FileSaver();
    int byteblock = 100;
    int offset = 0;
    if (id.equals("next")) {
        offset = (int) req.getSession().getAttribute("prevEndPos");
        String s = fs.readByteBlock(file, offset, byteblock);
        req.getSession().setAttribute("prevStartPos", offset);
        req.getSession().setAttribute("prevEndPos", offset + byteblock);
        return s;
    }
    if (id.equals("prev")) {
        offset = (int) req.getSession().getAttribute("prevStartPos") - byteblock;
        if (offset < 0) {
            String s = fs.readByteBlock(file, 0, byteblock);
            return s;
        }
        String s = fs.readByteBlock(file, offset, byteblock);
        req.getSession().setAttribute("prevStartPos", offset - byteblock);
        return s;
    }
    String s = fs.readByteBlock(file, 0, byteblock);
    req.getSession().setAttribute("prevStartPos", offset);
    req.getSession().setAttribute("prevEndPos", offset + byteblock);
    return s;
}
Each chunk of the file is put into an HTML table in my application. I save the current block's startByte and endByte into my session, but when the user presses the previous button on my page, how do I know where the previous block started and ended, since the session holds the current page and not the previous one?

First, a little general background on pagination.
Simple pagination only has two properties:
A page size
A page number
The back-end then sends back the data along with the following metadata:
Total number of pages
The current page number
The current page size
The client is then responsible for asking for the chunk of data it wants by sending the page number and page size; the backend then calculates the offset and end. The only thing to be aware of is that if the user increases the page size, the total number of pages (that the client received in the previous call) is no longer valid, but in that case the server just returns no data along with the new metadata.
This approach is used by many frameworks, like Spring Data on the backend and Ext JS on the front-end.
Since you are using Spring, you should just create a DTO class that holds both the data and the metadata:
class PageResult {
    String data;
    int totalPageCount;
    int currentPage;
    int pageSize;
}
If you are using String for transporting the data, be aware that you may have to call JSON.parse() on the front-end, and your JSON will be string-escaped, which looks silly. If you are reading your data using Jackson's ObjectMapper, I would recommend using ObjectMapper.readTree(), which returns a JsonNode, and putting that into your DTO class.
On the front-end you can do a dropdown with suggested page sizes, and an input box for the page number. If you want next and previous, just send a request with the current page +/- 1 and the page size; the backend will then calculate start, end, and total pages.
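To make this concrete, here is a minimal sketch of such a controller. FileSaver and PageResult follow the snippets in this thread; the method name, @RequestParam, and the total-page calculation are assumptions for illustration:

@RequestMapping("/page/{pageNumber}")
@ResponseBody
public PageResult getPage(@PathVariable int pageNumber, @RequestParam int pageSize) throws IOException {
    File file = new File("C:/teste.txt");
    // the offset is derived from page number and size, so no paging state lives in the session
    int offset = pageNumber * pageSize;
    PageResult result = new PageResult();
    result.data = new FileSaver().readByteBlock(file, offset, pageSize);
    result.totalPageCount = (int) Math.ceil((double) file.length() / pageSize);
    result.currentPage = pageNumber;
    result.pageSize = pageSize;
    return result;
}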

Instead of adding two int attributes to your session, why not add two lists of integers containing the appropriate values? And also add a 'page' attribute, so you can do something like:
if (id.equals("prev")) {
    List<Integer> offsetStarts = (List<Integer>) req.getSession().getAttribute("offsetStarts");
    List<Integer> offsetEnds = (List<Integer>) req.getSession().getAttribute("offsetEnds");
    int lastPage = (int) req.getSession().getAttribute("page");
    int desiredPage = lastPage - 1;
    int offsetStart = offsetStarts.get(desiredPage);
    int offsetEnd = offsetEnds.get(desiredPage);
    // get the chunk as desired, and return it
    req.getSession().setAttribute("page", desiredPage);
}
// if it's a new chunk, append offsets to the lists (see the sketch below)
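A sketch of that new-chunk bookkeeping, assuming the lists and the 'page' attribute were placed into the session on the first request (cast handling simplified):

// on a new chunk: record its offsets and advance the page counter
List<Integer> offsetStarts = (List<Integer>) req.getSession().getAttribute("offsetStarts");
List<Integer> offsetEnds = (List<Integer>) req.getSession().getAttribute("offsetEnds");
int page = (int) req.getSession().getAttribute("page");

int offsetStart = offsetEnds.get(offsetEnds.size() - 1); // the next chunk starts where the last one ended
offsetStarts.add(offsetStart);
offsetEnds.add(offsetStart + byteblock);
req.getSession().setAttribute("page", page + 1);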

Related

Export multiple images in one byte array (BLOB IBM DB2) to disk

I have a column "Content" (BLOB data) in a database (IBM DB2), and the data of one record looks like this (https://drive.google.com/file/d/12d1g5jtomJS-ingCn_n0GKMsM4RkdYzB/view?usp=sharing).
I have opened it in an editor and I think it contains more than one image (https://i.stack.imgur.com/2biLN.png, https://i.stack.imgur.com/ZwBOs.png).
I can export a single image from a byte array (using C#) to disk, but with multiple images I don't know how to do it.
Please help me! Thanks!
Edit 1:
I have tried to export it as a single image with this code:
private void readBLOB(DB2Connection conn, DB2Transaction trans)
{
    try
    {
        string SavePath = @"D:\MyBLOB";
        long CurrentIndex = 0;
        //the number of bytes to store in the array
        int BufferSize = 413454;
        //the number of bytes returned from the GetBytes() method
        long BytesReturned;
        //a byte array to hold the buffer
        byte[] Blob = new byte[BufferSize];
        DB2Command cmd = conn.CreateCommand();
        cmd.CommandText = "SELECT ATTR0102500126 " +
                          " FROM JCR.ICMUT01278001 " +
                          " WHERE COMPKEY = 'N21E26B04900FC6B1F00000'";
        cmd.Transaction = trans;
        DB2DataReader reader;
        reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        if (reader.Read())
        {
            FileStream fs = new FileStream(SavePath + "\\" + "quang canh.jpg", FileMode.OpenOrCreate, FileAccess.Write);
            BinaryWriter writer = new BinaryWriter(fs);
            //reset the index to the beginning of the file
            CurrentIndex = 0;
            BytesReturned = reader.GetBytes(
                0,            // the BlobsTable column index
                CurrentIndex, // the current index of the field from which to begin the read operation
                Blob,         // array to write the buffer to
                0,            // the start index of the array
                BufferSize    // the maximum length to copy into the buffer
            );
            while (BytesReturned == BufferSize)
            {
                writer.Write(Blob);
                writer.Flush();
                CurrentIndex += BufferSize;
                BytesReturned = reader.GetBytes(0, CurrentIndex, Blob, 0, BufferSize);
            }
            writer.Write(Blob, 0, (int)BytesReturned);
            writer.Flush();
            writer.Close();
            fs.Close();
        }
        reader.Close();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
But I cannot view the image; it shows a format error => https://i.stack.imgur.com/PNS9Q.png
You are currently assuming all BLOBs in that DB are JPEG images. But that is clearly not the case.
Option 1: This is faulty data
Programs that save to databases can fail.
Databases themselves might fail, especially if transactions are turned off. Transactions are most likely turned off for BLOBs.
The physical disk the data was stored on might have degraded. And again, you will not get a lot of redundancy and error correction with BLOBs (plus making use of the error correction requires going through the proper DBMS in the first place).
Option 2: This is not a JPEG
I know an article about Unicode that says "[...]problem comes down to one naive programmer who didn’t understand the simple fact that if you don’t tell me whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), you simply cannot display it correctly or even figure out where it ends."
This applies doubly, triply and quadruply to images:
this could be any number of formats that use interlacing.
this could be a professional graphics program's image/project file like TIFF, which can totally contain multiple images, up to one per layer you are working with.
this could even be an .SVG file (XML text that contains drawing orders) that was run through ZIP compression, the way a Word document is.
this could even be a PDF, where the images are usually appended at the back (allowing you to read the text with a partial file, similar to interlacing).
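If you want to check what the BLOB actually is before writing it out as a .jpg, sniff the leading "magic" bytes. A minimal sketch (in Java, matching the rest of this page; the format signatures are the standard published ones, but treat the helper itself as illustrative):

// Sketch: identify a BLOB's real format from its leading magic bytes
// instead of assuming it is a JPEG.
static String sniffFormat(byte[] blob) {
    if (blob.length >= 3 && (blob[0] & 0xFF) == 0xFF
            && (blob[1] & 0xFF) == 0xD8 && (blob[2] & 0xFF) == 0xFF) return "JPEG";
    if (blob.length >= 4 && (blob[0] & 0xFF) == 0x89
            && blob[1] == 'P' && blob[2] == 'N' && blob[3] == 'G') return "PNG";
    if (blob.length >= 4 && blob[0] == '%' && blob[1] == 'P'
            && blob[2] == 'D' && blob[3] == 'F') return "PDF";
    if (blob.length >= 2 && blob[0] == 'P' && blob[1] == 'K') return "ZIP (maybe DOCX/ODF/zipped SVG)";
    if (blob.length >= 2 && ((blob[0] == 'I' && blob[1] == 'I')
            || (blob[0] == 'M' && blob[1] == 'M'))) return "TIFF";
    return "unknown";
}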

How to extract end of data written inside a file in Java

I create an empty file with a desired length in Android using Java like this:
long length = 10L * 1024 * 1024 * 1024; // note the L suffix, or the int multiplication overflows
String file = "PATH\\File.mp4";
RandomAccessFile randomAccessFile = new RandomAccessFile(file, "rw");
randomAccessFile.setLength(length);
That code creates a file of the desired length filled with NULL data. Then I write data into the file like this:
randomAccessFile.write(DATA);
Now my question is: I want to extract the end of the data written into the file. I have written this function to find the end of data as fast as possible with binary search:
long extractEndOfData(RandomAccessFile accessFile, long from, long end) throws IOException {
    accessFile.seek(from);
    if (accessFile.read() == 0) {
        // this means no data has been written into the file
        return 0;
    }
    accessFile.seek(end);
    if (accessFile.read() != 0) {
        return end + 1;
    }
    long mid = (from + end) / 2;
    accessFile.seek(mid);
    if (accessFile.read() == 0) {
        return extractEndOfData(accessFile, from, mid - 1);
    } else {
        // the read above advanced the file pointer, so this reads the byte at mid + 1
        if (accessFile.read() == 0) {
            return mid + 1;
        } else {
            return extractEndOfData(accessFile, mid + 1, end);
        }
    }
}
and I call that function like this to find the end of the data in the file:
long endOfData = extractEndOfData(randomAccessFile, 0, randomAccessFile.length() - 1);
That function works fine for files whose data begins with non-NULL bytes and contains no NULL bytes within the data. But for some files it does not, because some files begin with NULL data.
What can I do to solve this problem? Thanks a lot.
I think your issue is clear: you will never be able to find out how much data has been written (or where the end of the content is) when you are only searching for a NULL inside the file. The reason is that NULL is a byte with the value 0x00, which appears in all kinds of binary files (maybe not text files), and on the other hand, your file is initialized with NULLs.
What you could do, for example, is store the size of the data written to the file in the first four bytes of the file.
So when writing the DATA to the file, first write its length, and then the actual data content.
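A minimal sketch of that header idea, assuming the payload length fits in the four-byte int header suggested above (names are illustrative):

// write: 4-byte length header followed by the payload
static void writeWithHeader(RandomAccessFile f, byte[] data) throws IOException {
    f.seek(0);
    f.writeInt(data.length); // header: length of the payload
    f.write(data);
}

// read: the header tells us exactly where the data ends,
// regardless of any NULL bytes inside the payload
static byte[] readWithHeader(RandomAccessFile f) throws IOException {
    f.seek(0);
    int length = f.readInt();
    byte[] data = new byte[length];
    f.readFully(data);
    return data;
}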
But I am still wondering why you don't initialize the file's size to the size you need.

How can I check the Payload size when using Azure Eventhubs and avoid a PayloadSizeExceededException?

So I am getting this exception:
com.microsoft.azure.servicebus.PayloadSizeExceededException: Size of the payload exceeded Maximum message size: 256 kb
I believe the exception is self-explanatory; however, I am not sure what to do about it.
private int MAXBYTES = (int) ((1024 * 256) * .8);

for (EHubMessage message : payloads) {
    byte[] payloadBytes = message.getPayload().getBytes(StandardCharsets.UTF_8);
    EventData sendEvent = new EventData(payloadBytes);
    events.add(sendEvent);
    byteCount += payloadBytes.length;
    if (byteCount > this.MAXBYTES) {
        calls.add(ehc.sendASync(events));
        logs.append("[Size:").append(events.size()).append(" - ").append(byteCount / 1024).append("kb] ");
        events = new LinkedList<EventData>();
        byteCount = 0;
        pushes++;
    }
}
I am counting the bytes and such. I have thought through the UTF-8 thing, but I believe that should not matter: a UTF-8 character can be more than one byte, but it should be counted correctly by getBytes().
I could not find a reliable way to get the bytes in a string, and I am not even sure how Azure counts the bytes; "payload" is a broad term and could include the boilerplate stuff and such.
Any ideas? It would be great if there were a
EventHubClient.checkPayload(list);
method, but there doesn't seem to be one. How do you guys check the payload size?
Per my experience, I think you need to check the size of the current payload count plus the new payload before you add the new payload into events, as below.
final int MAXBYTES = 1024 * 256; // not necessary to multiply by .8

for (EHubMessage message : payloads) {
    byte[] payloadBytes = message.getPayload().getBytes(StandardCharsets.UTF_8);
    if (byteCount + payloadBytes.length > MAXBYTES) {
        calls.add(ehc.sendASync(events));
        logs.append("[Size:").append(events.size()).append(" - ").append(byteCount / 1024).append("kb] ");
        events = new LinkedList<EventData>();
        byteCount = 0;
        pushes++;
    }
    EventData sendEvent = new EventData(payloadBytes);
    events.add(sendEvent);
    byteCount += payloadBytes.length; // keep the running total in sync
}
If you first add the new event data and only then check the payload size, it's too late: the batch of events to be sent might already exceed the payload limit.
Well, I should have included more of the actual code than I did in the original post. Here is what I came up with:
private int MAXBYTES = (int) ((1024 * 256) * .9);

for (EHubMessage message : payloads) {
    byte[] payloadBytes = message.getPayload().getBytes(StandardCharsets.UTF_8);
    int propsSize = message.getProps() == null ? 0 : message.getProps().toString().getBytes().length;
    int messageSize = payloadBytes.length + propsSize;
    if (byteCount + messageSize > this.MAXBYTES) {
        calls.add(ehc.sendASync(events));
        logs.append("[Size:").append(events.size()).append(" - ").append(byteCount / 1024).append("kb] ");
        events = new LinkedList<EventData>();
        byteCount = 0;
        pushes++;
    }
    byteCount += messageSize;
    EventData sendEvent = new EventData(payloadBytes);
    sendEvent.getProperties().putAll(message.getProps());
    events.add(sendEvent);
}
if (!events.isEmpty()) {
    calls.add(ehc.sendASync(events));
    logs.append("[Size:").append(events.size()).append(" - ").append(byteCount / 1024).append("kb]");
    pushes++;
}
// let's wait till they are done
CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();
If you notice, I was adding Properties to the EventData but not counting their bytes. The toString() for a Map returns something like:
{markiscool=markiscool}
Again, I am not sure of the boilerplate characters that the Azure API is adding, but I am sure it is not much. Notice I still back off MAXBYTES a bit, just in case.
It would still be good to get a "payload size checker" method in the API, but I would imagine it would have to build the payload first to give it back to you. I experimented with having my EHubMessage object figure this out for me, but getBytes() on a String actually does some conversion that I don't want to do twice.

Read formatted BLE advertising data in Android Logcat but can't invoke it

I aim to use the code at https://github.com/davidgyoung/ble-advert-counter/blob/master/app/src/main/java/com/radiusnetworks/blepacketcounter/MainActivity.java
to scan and read a BLE device's advertising data.
The code works well. I can see the formatted advertising data in Logcat, as shown in the screenshot.
But in the code I can't find the related log statement.
I don't see the BluetoothLeScanner class or the onScanResult() method being invoked anywhere.
I want to obtain the String "ScanResult{mDevice=F3:E5:7F:73:4F:81, mScanRecord=ScanRecord..." to get the formatted data value.
How can I achieve this?
Thanks
I'm not sure about the logs, but here's how you can get the data.
The onLeScan() callback has all the information that is being printed in the logs. To get the device information, use the device object from the callback (e.g. device.getAddress()). The scan record is in the callback's scanRecord byte array; you need to parse the array to get the information. I've used the code below to parse the scan record.
public WeakHashMap<Integer, String> ParseRecord(byte[] scanRecord) {
    WeakHashMap<Integer, String> ret = new WeakHashMap<>();
    int index = 0;
    while (index < scanRecord.length) {
        int length = scanRecord[index++];
        // zero value indicates that we are done with the record now
        if (length == 0) break;
        int type = scanRecord[index];
        // if the type is zero, then we are past the significant section of the data,
        // and we are thus done
        if (type == 0) break;
        byte[] data = Arrays.copyOfRange(scanRecord, index + 1, index + length);
        if (data != null && data.length > 0) {
            StringBuilder hex = new StringBuilder(data.length * 2);
            // the data appears to be there backwards
            for (int bb = data.length - 1; bb >= 0; bb--) {
                hex.append(String.format("%02X", data[bb]));
            }
            ret.put(type, hex.toString());
        }
        index += length;
    }
    return ret;
}
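For reference, a minimal sketch (assumed wiring, not taken from the linked project) of calling ParseRecord from the older BluetoothAdapter.LeScanCallback API, which delivers the same scanRecord bytes you see in the Logcat line:

// Sketch: invoking ParseRecord from the pre-Lollipop scan callback.
// Adjust accordingly if you use BluetoothLeScanner/onScanResult instead.
private BluetoothAdapter.LeScanCallback leScanCallback =
        new BluetoothAdapter.LeScanCallback() {
            @Override
            public void onLeScan(BluetoothDevice device, int rssi, byte[] scanRecord) {
                WeakHashMap<Integer, String> fields = ParseRecord(scanRecord);
                // device.getAddress() is the MAC that appears as mDevice in the log
                Log.d("BLE", "device=" + device.getAddress() + " rssi=" + rssi + " fields=" + fields);
            }
        };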
Refer to the link below to understand the BLE advertisement data format.
BLE obtain uuid encoded in advertising packet
Hope this helps.

Maximum as3 adobe JSON string length

I've written a socket communication server in Java and an AIR program in AS3, using a socket connection.
The communication through the socket connection is done with JSON serialization.
Sometimes, with really long JSON strings over the socket, the AS3 code says that there is a JSON parse error.
I end each JSON string with a terminator string so the program knows when a message is complete, so the problem is not the AIR program reading the message in parts.
The error occurs only with really long JSON strings, for example a string of length 78031. Are there any limits on JSON serialization?
I had the same problem. The problem is in the Flash app's reading of data from the socket.
The point is that the Flash ProgressEvent.SOCKET_DATA event fires even when the server hasn't yet sent all the data (especially when the data is big and the connection is slow).
So something like {"key":"value"} comes in two (or more) parts, like {"key":"val and ue"}. Also, you might sometimes receive several JSONs joined into one message, like {"json1key":"value"}{"json2key":"value"}; the built-in Flash JSON parser cannot handle these either.
To fight this, I recommend modifying the SocketData handler in your Flash app to add a cache for received strings, like this:
// declaring vars
private var _socket: Socket;
private var _cache: String = "";

// adding the event listener
_socket.addEventListener(ProgressEvent.SOCKET_DATA, onSocketData);

private function onSocketData(e: Event): void
{
    // take the incoming data from the socket
    var fromServer: ByteArray = new ByteArray();
    while (_socket.bytesAvailable)
    {
        _socket.readBytes(fromServer);
    }
    var receivedToString: String = fromServer.toString();
    _cache += receivedToString;
    if (receivedToString.length == 0) return; // nothing to parse

    // convert that long string to a Vector of JSONs;
    // here is a very small and not fail-safe algorithm for detecting separate JSONs in one long String
    var jsonPart: String = "";
    var jsonVector: Vector.<String> = new Vector.<String>;
    var bracketsCount: int = 0;
    var endOfLastJson: int = 0;
    for (var i: int = 0; i < _cache.length; i++)
    {
        if (_cache.charAt(i) == "{") bracketsCount += 1;
        if (bracketsCount > 0) jsonPart = jsonPart.concat(_cache.charAt(i));
        if (_cache.charAt(i) == "}")
        {
            bracketsCount -= 1;
            if (bracketsCount == 0)
            {
                jsonVector.push(jsonPart);
                jsonPart = "";
                endOfLastJson = i;
            }
        }
    }
    // removing the part that isn't needed anymore
    if (jsonVector.length > 0)
    {
        _cache = _cache.substr(endOfLastJson + 1);
    }
    for each (var part: String in jsonVector)
    {
        trace("RECEIVED: " + part); // voila! here is the full received JSON
    }
}
According to Adobe, it appears that you are not facing a JSON problem but a Socket limitation.
A String sent over a Socket via writeUTF() and readUTF() is limited to 65,535 bytes, because the string is prepended with a 16-bit unsigned length rather than being null-terminated.
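If you control both ends, one way around this limit is to do the framing yourself instead of using writeUTF(). A sketch of the Java server side with a 4-byte length header (names are illustrative; the AIR client would then readInt() the length and read that many bytes before parsing):

import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch: length-prefixed framing that avoids writeUTF()'s 16-bit (64 KB) limit.
static void sendJson(Socket socket, String json) throws java.io.IOException {
    byte[] bytes = json.getBytes(StandardCharsets.UTF_8);
    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
    out.writeInt(bytes.length); // 4-byte length header instead of writeUTF's 2-byte one
    out.write(bytes);
    out.flush();
}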
