I am trying to read the data from an NFC card I have for a project. It is a MIFARE Classic 1K card with 16 sectors.
I am able to connect to the card and I am trying to read the data (I know the data I want is in the 2nd sector, 2nd block). I can scan the card fine and it shows me the size of the card, so this assures me that the card is being scanned properly, but the data I get when I log "data.readBlock(2)" is just the same as the key I use to authenticate it.
What I understand from the code:
Card connects
Auth == true
I can get overall details of the card such as sector count / block count
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    Tag tagFromIntent = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
    MifareClassic tag = MifareClassic.get(tagFromIntent);
    try {
        // Variables
        int sectorCount = tag.getSectorCount();
        int tagSize = tag.getSize();
        boolean auth;
        // Keys
        byte[] defaultKeys = new byte[]{};
        defaultKeys = MifareClassic.KEY_DEFAULT;
        // Connecting to tag
        tag.connect();
        // auth = true
        auth = tag.authenticateSectorWithKeyA(2, defaultKeys);
        byte[] data = tag.readBlock(2);
        Log.i("OnNewIntent", "Data in sector 2: " + Arrays.toString(data));
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Expected = "Data in sector 2: The data in sector 2 block 2"
Actual = "Data in sector 2: [B#4df9e32"
The above Actual result changes each time the card is scanned.
What you are getting is the array's default toString() output (its type plus an identity hash code), not the contents of the block. To get a readable version of the data instead, use:
Arrays.toString(data);
By the way, you may want to change your code to check if the authentication was successful:
authSuccessful = mfc.authenticateSectorWithKeyA(sector, key);
if (authSuccessful) {
    // Read the block
    creditBlock = mfc.readBlock(block);
    String bytesString = Arrays.toString(creditBlock);
    Log.i(TAG, bytesString);
} else {
    Log.e(TAG, "Auth Failed");
}
Finally, I'm pretty sure what you are trying to do is just a standard MIFARE card read, so avoid jumping to conclusions. As they say in medicine:
Think horses, not zebras
I eventually fixed this problem by converting the returned byte array to a string using the UTF-8 charset. Cheers for the help
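For anyone landing here later, a minimal sketch of that conversion (assuming the block really holds UTF-8 encoded text; the helper below is made up and would be dropped into the activity):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import android.nfc.tech.MifareClassic;
import android.util.Log;

// Hypothetical helper: read one block and log it both as raw bytes and as text.
// blockIndex is the absolute block number on the card (each block is 16 bytes).
static void logBlockAsText(MifareClassic tag, int blockIndex) throws IOException {
    byte[] data = tag.readBlock(blockIndex);
    Log.i("OnNewIntent", "Raw bytes: " + Arrays.toString(data));
    // Only meaningful if the block actually stores UTF-8 encoded text
    String text = new String(data, StandardCharsets.UTF_8).trim();
    Log.i("OnNewIntent", "As text: " + text);
}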
I have a column "Content" (BLOB data) in database (IBM DB2) and the data of an record same that (https://drive.google.com/file/d/12d1g5jtomJS-ingCn_n0GKMsM4RkdYzB/view?usp=sharing)
I have opened it by editor and I think that it has more than one image in this (https://i.stack.imgur.com/2biLN.png, https://i.stack.imgur.com/ZwBOs.png).
I can export an image from byte array (using C#) to my disk, but with multiple images, I don't know how to do it.
Please help me! Thanks!
Edit 1:
I have tried to export it as a single image with this code:
private void readBLOB(DB2Connection conn, DB2Transaction trans)
{
    try
    {
        string SavePath = @"D:\MyBLOB";
        long CurrentIndex = 0;
        // the number of bytes to store in the array
        int BufferSize = 413454;
        // the number of bytes returned from the GetBytes() method
        long BytesReturned;
        // a byte array to hold the buffer
        byte[] Blob = new byte[BufferSize];

        DB2Command cmd = conn.CreateCommand();
        cmd.CommandText = "SELECT ATTR0102500126 " +
                          " FROM JCR.ICMUT01278001 " +
                          " WHERE COMPKEY = 'N21E26B04900FC6B1F00000'";
        cmd.Transaction = trans;

        DB2DataReader reader;
        reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        if (reader.Read())
        {
            FileStream fs = new FileStream(SavePath + "\\" + "quang canh.jpg", FileMode.OpenOrCreate, FileAccess.Write);
            BinaryWriter writer = new BinaryWriter(fs);
            // reset the index to the beginning of the file
            CurrentIndex = 0;
            BytesReturned = reader.GetBytes(
                0,            // the BlobsTable column index
                CurrentIndex, // the current index of the field from which to begin the read operation
                Blob,         // array to write the buffer to
                0,            // the start index of the array
                BufferSize    // the maximum length to copy into the buffer
            );
            while (BytesReturned == BufferSize)
            {
                writer.Write(Blob);
                writer.Flush();
                CurrentIndex += BufferSize;
                BytesReturned = reader.GetBytes(0, CurrentIndex, Blob, 0, BufferSize);
            }
            writer.Write(Blob, 0, (int)BytesReturned);
            writer.Flush();
            writer.Close();
            fs.Close();
        }
        reader.Close();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
But I cannot view the image; it shows a format error => https://i.stack.imgur.com/PNS9Q.png
You are currently assuming that all BLOBs in that DB are JPEG images. But that is clearly not the case.
Option 1: The data is faulty
Programs that save to databases can fail.
Databases themselves might fail, especially if transactions are turned off. Transactions are most likely turned off for BLOBs.
The physical disk the data was stored on might have degraded. And again, you will not get a lot of redundancy and error correction with BLOBs (plus making use of that error correction requires going through the proper DBMS in the first place).
Option 2: This is not a JPEG
I know an article about Unicode that says "[...]problem comes down to one naive programmer who didn’t understand the simple fact that if you don’t tell me whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), you simply cannot display it correctly or even figure out where it ends."
This applies doubly, triply and quadruply to images:
this could be any number of formats that use interlacing.
this could be a professional graphics program's image/project file, like TIFF, which can contain multiple images - up to one per layer you are working with.
this could even be an .SVG file (XML text that contains drawing orders) that was run through .ZIP compression, the way a Word document is.
this could even be a PDF, where the images are usually appended at the back (allowing you to read the text with a partial file, similar to interleaving).
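If you want to narrow this down quickly, comparing the first bytes of the BLOB against well-known file signatures is cheap. A rough sketch of the idea (written in Java like most of this page; the same approach ports directly to C#):

import java.util.Arrays;

// Rough sketch: guess a container format from the first bytes of a BLOB.
// Only a handful of signatures are checked; anything else is reported as unknown.
public final class BlobSniffer {

    public static String guessFormat(byte[] blob) {
        if (startsWith(blob, new byte[]{(byte) 0xFF, (byte) 0xD8, (byte) 0xFF})) return "JPEG";
        if (startsWith(blob, new byte[]{(byte) 0x89, 'P', 'N', 'G'}))            return "PNG";
        if (startsWith(blob, new byte[]{'%', 'P', 'D', 'F'}))                    return "PDF";
        if (startsWith(blob, new byte[]{'P', 'K'}))                              return "ZIP container (docx, xlsx, jar, ...)";
        if (startsWith(blob, new byte[]{(byte) 0x1F, (byte) 0x8B}))              return "GZIP (e.g. svgz)";
        if (startsWith(blob, new byte[]{'I', 'I', 0x2A, 0x00})
                || startsWith(blob, new byte[]{'M', 'M', 0x00, 0x2A}))           return "TIFF";
        return "unknown";
    }

    private static boolean startsWith(byte[] data, byte[] prefix) {
        if (data == null || data.length < prefix.length) return false;
        return Arrays.equals(Arrays.copyOfRange(data, 0, prefix.length), prefix);
    }
}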
I'm currently developing a system that gets data from a battery pack of an electric vehicle, stores it in a database and displays it on a screen.
So I have a Java application that reads the data from a hardware interface, interprets the values and sends them via socket to a Node.js server. (The Java app and web server run on the same computer, so URL = localhost.)
JAVA APP:
s = new Socket();
s.connect(new InetSocketAddress(URL, PORT));
out = new PrintWriter(s.getOutputStream(), true);

for (DataEntry e : entries) {
    out.printf(e.toJson());
}
NODE:
sock.on('data', function(data) {
  try {
    var data = JSON.parse(data);
    db.serialize(function() {
      db.run("INSERT INTO DataEntry(value, MessageField, time) values(" + data.value + "," + data.messageFieldID + ", STRFTIME('%Y-%m-%d %H:%M:%f'))");
    });
  } catch(e) {}
});
I get about 20 messages per second from the hardware interface, which are converted into 100 JSON strings. So the webserver has to process one message in 10 ms, which I thought was manageable.
But here is the problem: if my entries list (foreach loop) has more than 2 elements, the webserver receives 2 or more of the JSONs in one message.
So the first message was divided into 2 parts (ID 41, 42) and was processed correctly. But the second message was divided into 5 parts (ID 43-47), and the first 4 of them weren't sent alone, so only the last one was saved correctly.
How can I ensure that every JSON string arrives as its own message?
Isn't there something like a buffer so that the socket.on method is called separately for every message I send?
I hope somebody can help me
Thank you!
Benedikt :)
TCP sockets are just streams, and you shouldn't make any assumptions about how much of a "message" is contained in a single packet.
A simple solution is to terminate each message with a newline character, since serialized JSON cannot contain a raw newline. From there it's a simple matter of buffering data until you see a newline character, then calling JSON.parse() on your buffer. For example:
var buf = '';
sock.on('data', function(data) {
  buf += data;
  var p;
  // Use a while loop as it may be possible to have multiple
  // messages buffered depending on chunk contents
  while (~(p = buf.indexOf('\n'))) {
    try {
      var msg = JSON.parse(buf.slice(0, p));
    } catch (ex) {
      console.log('Bad JSON message: ' + ex);
    }
    buf = buf.slice(p + 1);
  }
});
You will also need to change printf() to println() on the Java-side so that a newline character will be appended to each message.
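For completeness, a hypothetical Java-side sender illustrating that framing (class name and sample payloads are made up; in your own code the only change needed is out.println(e.toJson()) in the loop over entries):

import java.io.PrintWriter;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Arrays;
import java.util.List;

// Hypothetical sender: one JSON object per line, so the Node side can split
// the byte stream on '\n' before calling JSON.parse().
public final class EntrySender {

    public static void main(String[] args) throws Exception {
        List<String> jsonEntries = Arrays.asList(
                "{\"value\":3.7,\"messageFieldID\":41}",
                "{\"value\":3.8,\"messageFieldID\":42}");
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("localhost", 3000)); // port is an assumption
            PrintWriter out = new PrintWriter(s.getOutputStream(), true);
            for (String json : jsonEntries) {
                out.println(json); // println appends the '\n' delimiter; printf would not
            }
        }
    }
}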
I am currently trying to write a jukebox-like application in Java that is able to play any audio source possible, but I have encountered some difficulties when trying to play radio streams.
For playback I use JLayer from JavaZoom, which works fine as long as the target is a direct media file or a direct media stream (I can play PCM, MP3 and OGG just fine). However, I encounter difficulties with radio streams that either deliver pre-media data like an m3u/pls file (which I could fix by adding a detection step beforehand), or stream data on port 80 while a web page exists at the same location and the media transmitted depends on the type of request. In the latter case, whenever I try to stream the media, I instead get the HTML data.
Example link of a stream that is hidden behind a web-page: http://stream.t-n-media.de:8030
This is playable in VLC, but if you put it into a browser or my application you'll receive an HTML file.
Is there:
A ready-made, free solution that I could use in place of JLayer? Preferably open source so I can study it?
A tutorial that can help me to write a solution on my own?
Or can someone give me an example on how to properly detect/request a media stream?
Thanks in advance!
import java.io.*;
import java.net.*;
import javax.sound.sampled.*;
import javax.sound.midi.*;
/**
* This class plays sounds streaming from a URL: it does not have to preload
* the entire sound into memory before playing it. It is a command-line
* application with no gui. It includes code to convert ULAW and ALAW
* audio formats to PCM so they can be played. Use the -m command-line option
* before MIDI files.
*/
public class PlaySoundStream {
// Create a URL from the command-line argument and pass it to the
// right static method depending on the presence of the -m (MIDI) option.
public static void main(String[ ] args) throws Exception {
if (args[0].equals("-m")) streamMidiSequence(new URL(args[1]));
else streamSampledAudio(new URL(args[0]));
// Exit explicitly.
// This is needed because the audio system starts background threads.
System.exit(0);
}
/** Read sampled audio data from the specified URL and play it */
public static void streamSampledAudio(URL url)
throws IOException, UnsupportedAudioFileException,
LineUnavailableException
{
AudioInputStream ain = null; // We read audio data from here
SourceDataLine line = null; // And write it here.
try {
// Get an audio input stream from the URL
ain=AudioSystem.getAudioInputStream(url);
// Get information about the format of the stream
AudioFormat format = ain.getFormat( );
DataLine.Info info=new DataLine.Info(SourceDataLine.class,format);
// If the format is not supported directly (i.e. if it is not PCM
// encoded), then try to transcode it to PCM.
if (!AudioSystem.isLineSupported(info)) {
// This is the PCM format we want to transcode to.
// The parameters here are audio format details that you
// shouldn't need to understand for casual use.
AudioFormat pcm =
new AudioFormat(format.getSampleRate( ), 16,
format.getChannels( ), true, false);
// Get a wrapper stream around the input stream that does the
// transcoding for us.
ain = AudioSystem.getAudioInputStream(pcm, ain);
// Update the format and info variables for the transcoded data
format = ain.getFormat( );
info = new DataLine.Info(SourceDataLine.class, format);
}
// Open the line through which we'll play the streaming audio.
line = (SourceDataLine) AudioSystem.getLine(info);
line.open(format);
// Allocate a buffer for reading from the input stream and writing
// to the line. Make it large enough to hold 4k audio frames.
// Note that the SourceDataLine also has its own internal buffer.
int framesize = format.getFrameSize( );
byte[ ] buffer = new byte[4 * 1024 * framesize]; // the buffer
int numbytes = 0; // how many bytes
// We haven't started the line yet.
boolean started = false;
for(;;) { // We'll exit the loop when we reach the end of stream
// First, read some bytes from the input stream.
int bytesread=ain.read(buffer,numbytes,buffer.length-numbytes);
// If there were no more bytes to read, we're done.
if (bytesread == -1) break;
numbytes += bytesread;
// Now that we've got some audio data to write to the line,
// start the line, so it will play that data as we write it.
if (!started) {
line.start( );
started = true;
}
// We must write bytes to the line in an integer multiple of
// the framesize. So figure out how many bytes we'll write.
int bytestowrite = (numbytes/framesize)*framesize;
// Now write the bytes. The line will buffer them and play
// them. This call will block until all bytes are written.
line.write(buffer, 0, bytestowrite);
// If we didn't have an integer multiple of the frame size,
// then copy the remaining bytes to the start of the buffer.
int remaining = numbytes - bytestowrite;
if (remaining > 0)
System.arraycopy(buffer,bytestowrite,buffer,0,remaining);
numbytes = remaining;
}
// Now block until all buffered sound finishes playing.
line.drain( );
}
finally { // Always relinquish the resources we use
if (line != null) line.close( );
if (ain != null) ain.close( );
}
}
// A MIDI protocol constant that isn't defined by javax.sound.midi
public static final int END_OF_TRACK = 47;
/** Read MIDI or RMF data from the specified URL and play it */
public static void streamMidiSequence(URL url)
throws IOException, InvalidMidiDataException, MidiUnavailableException
{
Sequencer sequencer=null; // Converts a Sequence to MIDI events
Synthesizer synthesizer=null; // Plays notes in response to MIDI events
try {
// Create, open, and connect a Sequencer and Synthesizer
// They are closed in the finally block at the end of this method.
sequencer = MidiSystem.getSequencer( );
sequencer.open( );
synthesizer = MidiSystem.getSynthesizer( );
synthesizer.open( );
sequencer.getTransmitter( ).setReceiver(synthesizer.getReceiver( ));
// Specify the InputStream to stream the sequence from
sequencer.setSequence(url.openStream( ));
// This is an arbitrary object used with wait and notify to
// prevent the method from returning before the music finishes
final Object lock = new Object( );
// Register a listener to make the method exit when the stream is
// done. See Object.wait( ) and Object.notify( )
sequencer.addMetaEventListener(new MetaEventListener( ) {
public void meta(MetaMessage e) {
if (e.getType( ) == END_OF_TRACK) {
synchronized(lock) {
lock.notify( );
}
}
}
});
// Start playing the music
sequencer.start( );
// Now block until the listener above notifies us that we're done.
synchronized(lock) {
while(sequencer.isRunning( )) {
try { lock.wait( ); } catch(InterruptedException e) { }
}
}
}
finally {
// Always relinquish the sequencer, so others can use it.
if (sequencer != null) sequencer.close( );
if (synthesizer != null) synthesizer.close( );
}
}
}
I have used this piece of code in one of my projects that deals with audio streaming, and it was working just fine.
Furthermore, you can see similar examples here:
Java Audio Example
Just reading the javadoc of AudioSystem gives me an idea.
There is another signature for getAudioInputStream: you can give it an InputStream instead of a URL.
So, try to open the input stream yourself and add the needed headers, so that you get the stream instead of the HTML content:
URLConnection uc = url.openConnection();
uc.setRequestProperty("<header name here>", "<header value here>");
InputStream in = uc.getInputStream();
ain = AudioSystem.getAudioInputStream(in);
Hope this helps.
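To make that concrete, here is a rough, unverified sketch. The header values are assumptions: many SHOUTcast/Icecast-style servers decide between HTML and audio based on the request headers (typically the User-Agent), so check with VLC or curl -v what your particular server expects. Decoding the MP3 data still requires an SPI such as JLayer's on the classpath.

import java.io.BufferedInputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class StreamProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://stream.t-n-media.de:8030"); // stream from the question
        URLConnection uc = url.openConnection();
        // Assumed headers: present ourselves as a media player rather than a browser
        uc.setRequestProperty("User-Agent", "WinampMPEG/5.0");
        uc.setRequestProperty("Accept", "audio/mpeg, */*");
        // BufferedInputStream provides the mark/reset support AudioSystem needs
        InputStream in = new BufferedInputStream(uc.getInputStream());
        AudioInputStream ain = AudioSystem.getAudioInputStream(in);
        System.out.println("Detected format: " + ain.getFormat());
        ain.close();
    }
}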
I know this answer comes late, but I had the same issue: I wanted to play MP3 and AAC audio and also wanted the user to insert PLS/M3U links. Here is what I did:
First I tried to parse the type by using the simple file name:
import de.webradio.enumerations.FileExtension;

import java.net.URL;

public class FileExtensionParser {
    /**
     * Parses a file extension
     * @param filenameUrl the url
     * @return the file extension. If it cannot be determined from the file name,
     *         Apache Tika detects it from the live content
     */
    public FileExtension parseFileExtension(URL filenameUrl) {
        String filename = filenameUrl.toString();
        if (filename.endsWith(".mp3")) {
            return FileExtension.MP3;
        } else if (filename.endsWith(".m3u") || filename.endsWith(".m3u8")) {
            return FileExtension.M3U;
        } else if (filename.endsWith(".aac")) {
            return FileExtension.AAC;
        } else if (filename.endsWith(".pls")) {
            return FileExtension.PLS;
        }
        URLTypeParser parser = new URLTypeParser();
        return parser.parseByContentDetection(filenameUrl);
    }
}
If that fails, I use Apache Tika to do a kind of live detection:
import de.webradio.enumerations.FileExtension;

import org.apache.tika.exception.TikaException;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.audio.AudioParser;
import org.apache.tika.sax.BodyContentHandler;
import org.xml.sax.SAXException;

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class URLTypeParser {
    /**
     * This class uses Apache Tika to parse a URL using its content
     *
     * @param url the webstream url
     * @return the detected file encoding: MP3, AAC or unsupported
     */
    public FileExtension parseByContentDetection(URL url) {
        try {
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            InputStream in = connection.getInputStream();
            BodyContentHandler handler = new BodyContentHandler();
            AudioParser parser = new AudioParser();
            Metadata metadata = new Metadata();
            parser.parse(in, handler, metadata);
            return parseMediaType(metadata);
        } catch (IOException e) {
            e.printStackTrace();
        } catch (TikaException e) {
            e.printStackTrace();
        } catch (SAXException e) {
            e.printStackTrace();
        }
        return FileExtension.UNSUPPORTED_TYPE;
    }

    private FileExtension parseMediaType(Metadata metadata) {
        String parsedMediaType = metadata.get("encoding");
        if (parsedMediaType.equalsIgnoreCase("aac")) {
            return FileExtension.AAC;
        } else if (parsedMediaType.equalsIgnoreCase("mpeg1l3")) {
            return FileExtension.MP3;
        }
        return FileExtension.UNSUPPORTED_TYPE;
    }
}
This will also solve the HTML problem, since the method will return FileExtension.UNSUPPORTED_TYPE for HTML content.
I combined these classes with a factory pattern (a rough sketch of the idea is below) and it works fine. The live detection takes only about two seconds.
I don't think this will help you anymore, but since I struggled for almost three weeks I wanted to provide a working answer. You can see the whole project at github: https://github.com/Seppl2202/webradio
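To illustrate the factory idea mentioned above (all names here are made up for the sketch; the real wiring lives in the linked project):

import de.webradio.enumerations.FileExtension;

import java.net.URL;

// Hypothetical factory: pick a player implementation based on the detected type.
public final class PlayerFactory {

    public interface StreamPlayer {
        void play(URL url) throws Exception;
    }

    // Trivial stand-ins for the real JLayer/AAC-backed players
    static class Mp3Player implements StreamPlayer {
        public void play(URL url) { /* JLayer playback would go here */ }
    }

    static class AacPlayer implements StreamPlayer {
        public void play(URL url) { /* AAC playback would go here */ }
    }

    public static StreamPlayer forExtension(FileExtension ext) {
        switch (ext) {
            case MP3: return new Mp3Player();
            case AAC: return new AacPlayer();
            default:  throw new IllegalArgumentException("Unsupported stream type: " + ext);
        }
    }
}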
I would like to get the metadata from an image file on my local system using Java code.
In the attached image you can see the desired data which I would like to pull from Java code.
I wrote the code below, but it does not pull the data shown in the "Details" tab. The code's output is the following, and it is not what I am looking for:
Started ..
Format name: javax_imageio_jpeg_image_1.0
Format name: javax_imageio_1.0
Please give me your ideas. Thanks
try {
    ImageInputStream inStream = ImageIO.createImageInputStream(new File("D:\\codeTest\\arun.jpg"));
    Iterator<ImageReader> imgItr = ImageIO.getImageReaders(inStream);
    while (imgItr.hasNext()) {
        ImageReader reader = imgItr.next();
        reader.setInput(inStream, true);
        IIOMetadata metadata = reader.getImageMetadata(0);
        String[] names = metadata.getMetadataFormatNames();
        int length = names.length;
        for (int i = 0; i < length; i++) {
            System.out.println("Format name: " + names[i]);
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
There's no easy way to do it with the Java Core API. You'd have to parse the image's metadata tree, and interpret the proper EXIF tags. Instead, you can pick up the required code from an existing library with EXIF-parsing capabilities, and use it in yours. For example, I have used the Image class of javaxt, which provides a very useful method to extract GPS metadata from an image. It is as simple as:
javaxt.io.Image image = new javaxt.io.Image("D:\\codeTest\\arun.jpg");
double[] gps = image.getGPSCoordinate();
Plus, javaxt.io.Image has no external dependencies, so you can just use that particular class if you don't want to add a dependency on the entire library.
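It's worth guarding the result, since not every photo carries GPS tags (if I remember correctly, getGPSCoordinate() returns null in that case):

javaxt.io.Image image = new javaxt.io.Image("D:\\codeTest\\arun.jpg");
double[] gps = image.getGPSCoordinate();
if (gps != null) {
    System.out.println("GPS coordinate: " + java.util.Arrays.toString(gps));
} else {
    System.out.println("No GPS metadata found in this image");
}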
I suggest you read the EXIF header of the image and then parse the tags for finding the GPS information. In Java there is a great library (called metadata-extractor) for extracting and parsing the EXIF header. Please see the getting started for this library here.
Once you do the first 2 steps in the tutorial, look for the tags starting with [GPS] ([GPS] GPS Longitude, [GPS] GPS Latitude, ...).
Based on @dan-d's answer, here is my code (Kotlin):
private fun readGps(file: String): Optional<GeoLocation> {
    // Read all metadata from the image
    val metadata: Metadata = ImageMetadataReader.readMetadata(File(file))
    // See whether it has GPS data
    val gpsDirectories = metadata.getDirectoriesOfType(GpsDirectory::class.java)
    for (gpsDirectory in gpsDirectories) {
        // Try to read out the location, making sure it's non-zero
        val geoLocation = gpsDirectory.geoLocation
        if (geoLocation != null && !geoLocation.isZero) {
            return Optional.of(geoLocation)
        }
    }
    return Optional.empty()
}
I am getting a Java heap space error while writing large amounts of data from the database to an Excel sheet.
I don't want to use the JVM -Xmx option to increase memory.
Following are the details:
1) I am using the org.apache.poi.hssf API for Excel sheet writing.
2) JDK version 1.5
3) Tomcat 6.0
The code I have written works well for around 23 thousand records, but it fails for more than 23K records.
Following is the code:
ArrayList l_objAllTBMList= new ArrayList();
l_objAllTBMList = (ArrayList) m_objFreqCvrgDAO.fetchAllTBMUsers(p_strUserTerritoryId);
ArrayList l_objDocList = new ArrayList();
m_objTotalDocDtlsInDVL= new HashMap();
Object l_objTBMRecord[] = null;
Object l_objVstdDocRecord[] = null;
int l_intDocLstSize=0;
VisitedDoctorsVO l_objVisitedDoctorsVO=null;
int l_tbmListSize=l_objAllTBMList.size();
System.out.println(" getMissedDocDtlsList_NSM ");
for(int i=0; i<l_tbmListSize;i++)
{
l_objTBMRecord = (Object[]) l_objAllTBMList.get(i);
l_objDocList = (ArrayList) m_objGenerateVisitdDocsReportDAO.fetchAllDocDtlsInDVL_NSM((String) l_objTBMRecord[1], p_divCode, (String) l_objTBMRecord[2], p_startDt, p_endDt, p_planType, p_LMSValue, p_CycleId, p_finYrId);
l_intDocLstSize=l_objDocList.size();
try {
l_objVOFactoryForDoctors = new VOFactory(l_intDocLstSize, VisitedDoctorsVO.class);
/* Factory class written to create and maintain limited no of Value Objects (VOs)*/
} catch (ClassNotFoundException ex) {
m_objLogger.debug("DEBUG:getMissedDocDtlsList_NSM :Exception:"+ex);
} catch (InstantiationException ex) {
m_objLogger.debug("DEBUG:getMissedDocDtlsList_NSM :Exception:"+ex);
} catch (IllegalAccessException ex) {
m_objLogger.debug("DEBUG:getMissedDocDtlsList_NSM :Exception:"+ex);
}
for(int j=0; j<l_intDocLstSize;j++)
{
l_objVstdDocRecord = (Object[]) l_objDocList.get(j);
l_objVisitedDoctorsVO = (VisitedDoctorsVO) l_objVOFactoryForDoctors.getVo();
if (((String) l_objVstdDocRecord[6]).equalsIgnoreCase("-"))
{
if (String.valueOf(l_objVstdDocRecord[2]) != "null")
{
l_objVisitedDoctorsVO.setPotential_score(String.valueOf(l_objVstdDocRecord[2]));
l_objVisitedDoctorsVO.setEmpcode((String) l_objTBMRecord[1]);
l_objVisitedDoctorsVO.setEmpname((String) l_objTBMRecord[0]);
l_objVisitedDoctorsVO.setDoctorid((String) l_objVstdDocRecord[1]);
l_objVisitedDoctorsVO.setDr_name((String) l_objVstdDocRecord[4] + " " + (String) l_objVstdDocRecord[5]);
l_objVisitedDoctorsVO.setDoctor_potential((String) l_objVstdDocRecord[3]);
l_objVisitedDoctorsVO.setSpeciality((String) l_objVstdDocRecord[7]);
l_objVisitedDoctorsVO.setActualpractice((String) l_objVstdDocRecord[8]);
l_objVisitedDoctorsVO.setLastmet("-");
l_objVisitedDoctorsVO.setPreviousmet("-");
m_objTotalDocDtlsInDVL.put((String) l_objVstdDocRecord[1], l_objVisitedDoctorsVO);
}
}
}// End of inner for loop
writeExcelSheet(); // Pasting this method at the end
// Clean up code
l_objVOFactoryForDoctors.resetFactory();
m_objTotalDocDtlsInDVL.clear();// Clear the used map
l_objDocList=null;
l_objTBMRecord=null;
l_objVstdDocRecord=null;
}// End of outer for loop
l_objAllTBMList=null;
m_objTotalDocDtlsInDVL=null;
-------------------------------------------------------------------
private void writeExcelSheet() throws IOException
{
HSSFRow l_objRow = null;
HSSFCell l_objCell = null;
VisitedDoctorsVO l_objVisitedDoctorsVO = null;
Iterator l_itrDocMap = m_objTotalDocDtlsInDVL.keySet().iterator();
while (l_itrDocMap.hasNext())
{
Object key = l_itrDocMap.next();
l_objVisitedDoctorsVO = (VisitedDoctorsVO) m_objTotalDocDtlsInDVL.get(key);
l_objRow = m_objSheet.createRow(m_iRowCount++);
l_objCell = l_objRow.createCell(0);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(String.valueOf(l_intSrNo++));
l_objCell = l_objRow.createCell(1);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getEmpname() + " (" + l_objVisitedDoctorsVO.getEmpcode() + ")"); // TBM Name
l_objCell = l_objRow.createCell(2);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getDr_name());// Doc Name
l_objCell = l_objRow.createCell(3);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getPotential_score());// Freq potential score
l_objCell = l_objRow.createCell(4);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getDoctor_potential());// Freq potential score
l_objCell = l_objRow.createCell(5);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getSpeciality());//CP_GP_SPL
l_objCell = l_objRow.createCell(6);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getActualpractice());// Actual practise
l_objCell = l_objRow.createCell(7);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getPreviousmet());// Lastmet
l_objCell = l_objRow.createCell(8);
l_objCell.setCellStyle(m_objCellStyle4);
l_objCell.setCellValue(l_objVisitedDoctorsVO.getLastmet());// Previousmet
}
// Write OutPut Stream
try {
out = new FileOutputStream(m_objFile);
outBf = new BufferedOutputStream(out);
m_objWorkBook.write(outBf);
} catch (Exception ioe) {
ioe.printStackTrace();
System.out.println(" Exception in chunk write");
} finally {
if (outBf != null) {
outBf.flush();
outBf.close();
out.close();
l_objRow=null;
l_objCell=null;
}
}
}
Instead of populating the complete list in memory before starting to write to Excel, you need to modify the code so that each object is written to the file as it is read from the database. Take a look at this question to get an idea of the other approach.
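If you are able to move to a newer POI version (and JVM), the streaming SXSSF API does essentially this for you: it keeps only a sliding window of rows in memory and flushes everything else to a temporary file. A rough sketch, assuming POI 3.8+ and noting that it produces .xlsx rather than the old .xls format:

import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;

public class StreamingExcelWrite {
    public static void main(String[] args) throws Exception {
        // Keep at most 100 rows in memory; older rows are flushed to disk.
        SXSSFWorkbook wb = new SXSSFWorkbook(100);
        Sheet sheet = wb.createSheet("report");
        for (int rownum = 0; rownum < 100000; rownum++) { // imagine one DB record per row
            Row row = sheet.createRow(rownum);
            Cell cell = row.createCell(0);
            cell.setCellValue("record " + rownum);
        }
        try (FileOutputStream out = new FileOutputStream("report.xlsx")) {
            wb.write(out);
        }
        wb.dispose(); // delete the temporary files backing the flushed rows
        wb.close();
    }
}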
Well, I'm not sure if POI can handle incremental updates, but if so you might want to write chunks of, say, 10000 rows to the file. If not, you might have to use CSV instead (so no formatting) or increase memory.
The problem is that you need to make the objects written to the file eligible for garbage collection (no references from a live thread anymore) before writing the file is finished (before all rows have been generated and written to the file).
Edit:
If you can write smaller chunks of data to the file, you'd also have to load only the necessary chunks from the DB. It doesn't make sense to load 50000 records at once and then write 5 chunks of 10000, since those 50000 records are likely to consume a lot of memory already.
As Thomas points out, you have too many objects taking up too much space and need a way to reduce that. There are a couple of strategies for this I can think of:
Do you need to create a new factory each time in the loop, or can you reuse it?
Can you start with a loop getting the information you need into a new structure, and then discard the old one?
Can you split the processing into a thread chain, sending information forwards to the next step, and avoid building a large memory-consuming structure at all? (A rough sketch of this idea follows below.)
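A minimal sketch of that third strategy, using a bounded queue as the hand-off point (RowData, fetchNextRow() and writeRow() are hypothetical placeholders, and the syntax assumes a newer JDK than the 1.5 mentioned in the question):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the "thread chain" idea: a bounded queue between the DB-reading step
// and the Excel-writing step, so only a small window of records is alive at once.
public class PipelineSketch {
    private static final RowData END = new RowData();

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<RowData> queue = new ArrayBlockingQueue<>(1000);

        Thread writer = new Thread(() -> {
            try {
                RowData row;
                while ((row = queue.take()) != END) {
                    writeRow(row); // e.g. one POI createRow/createCell pass
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();

        RowData row;
        while ((row = fetchNextRow()) != null) { // read from the DB cursor, one record at a time
            queue.put(row);                      // blocks if the writer falls behind
        }
        queue.put(END);                          // signal end of data
        writer.join();
    }

    static class RowData { /* fields for one report row */ }
    static RowData fetchNextRow() { return null; }      // placeholder: DB access goes here
    static void writeRow(RowData row) { /* placeholder */ }
}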