I'm trying to access the transactions contained in the blocks I have downloaded, but none of the blocks have any transactions; the size of every Transaction list returned is zero. Am I conceptually misunderstanding something about the Bitcoin blockchain, or is there something wrong with my code?
static NetworkParameters params = MainNetParams.get();
static WalletAppKit kit = new WalletAppKit(params, new java.io.File("."), "chain");

/* store_TX() gets Transactions from blocks and stores them in a file */
protected static void store_TX() throws BlockStoreException, FileNotFoundException, UnsupportedEncodingException {
    File txf = new File("TX.txt");
    PrintWriter hwriter = new PrintWriter("TX.txt", "UTF-8");
    BlockChain chain = kit.chain();
    BlockStore block_store = chain.getBlockStore();
    StoredBlock stored_block = block_store.getChainHead();
    // walk backwards from the chain head; getPrev() returns null at the genesis block
    while (stored_block != null) {
        Block block = stored_block.getHeader();
        List<Transaction> tx_list = block.getTransactions();
        if (tx_list != null && tx_list.size() > 0) {
            hwriter.println(block.getHashAsString());
        }
        stored_block = stored_block.getPrev(block_store);
    }
    hwriter.close();
}

public static void main(String[] args) {
    BriefLogFormatter.init();
    synchronized (kit.startAndWait()) {
        try {
            store_TX();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        } catch (BlockStoreException e) {
            e.printStackTrace();
        }
    }
} // end main
You need to use FullPrunedBlockChain; the plain BlockChain class only supports SPV mode, where blocks are stored as headers without their transactions.
See https://bitcoinj.github.io/full-verification
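For reference, a minimal sketch of a full-verification setup, assuming bitcoinj's H2FullPrunedBlockStore (the database name "fullchain" and the full-store depth of 1000 are illustrative values, and some method names vary slightly between bitcoinj versions):

import org.bitcoinj.core.FullPrunedBlockChain;
import org.bitcoinj.core.NetworkParameters;
import org.bitcoinj.core.PeerGroup;
import org.bitcoinj.net.discovery.DnsDiscovery;
import org.bitcoinj.params.MainNetParams;
import org.bitcoinj.store.H2FullPrunedBlockStore;

public class FullChainDemo {
    public static void main(String[] args) throws Exception {
        NetworkParameters params = MainNetParams.get();
        // Keep full undo data for the last 1000 blocks; older blocks are pruned.
        H2FullPrunedBlockStore store = new H2FullPrunedBlockStore(params, "fullchain", 1000);
        FullPrunedBlockChain chain = new FullPrunedBlockChain(params, store);
        PeerGroup peerGroup = new PeerGroup(params, chain);
        peerGroup.addPeerDiscovery(new DnsDiscovery(params));
        peerGroup.start();
        peerGroup.downloadBlockChain(); // downloads and fully verifies blocks
    }
}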
It depends on how you downloaded those blocks. If you downloaded them, for example, via the BlocksDownloadedEventListener, then you only received the block headers, which do not contain the transactions. If you want the transactions too, you can use Peer.getBlock(blockHash) to request a download of the full block from that peer, which will also contain the transactions and related information (i.e. the block reward).
Also, you would need to use another type of BlockStore for persisting your blocks, as the SPVBlockStore (the default for WalletAppKit) only saves block headers (so no transactions).
You can find all types of BlockStores here, so you can choose what suits you best, but always read the description of what they save, so you don't run into this problem again.
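As a rough sketch of the Peer.getBlock() route (it returns a future because the download is asynchronous; peer and blockHash are assumed to already exist in your code):

// Request the full block, including transactions, from a connected peer.
ListenableFuture<Block> future = peer.getBlock(blockHash);
Block fullBlock = future.get(); // waits until the peer responds; may throw
for (Transaction tx : fullBlock.getTransactions()) {
    System.out.println(tx.getHashAsString());
}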
I have been frustrated for almost two days trying to find the flaw: the registro.delete() call does not delete the file "Registro.txt". I'm working with a GUI, and every time I click on a row of a JTable and then click the "Ban" button, it does not delete the file "Registro.txt", and it does not write either! However, if I do the same thing from another class, like the class that has the main() method, it works properly. What I wanted to do is delete a line from Registro.txt by writing all the lines that do not contain a certain String name to another .txt file, and then renaming that file to Registro.txt. I do not know what is happening. Below is my code:
ActionListener ban = new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        int fila = table.getSelectedRow();
        String nombre = (String) modelo.getValueAt(fila, 0);
        modelo.removeRow(fila);
        try {
            removeUser(nombre);
        } catch (IOException ex) {
            System.out.println(ex.getMessage());
        }
    }
};
btnBanear.addActionListener(ban);
...
public void removeUser(String nombre) throws IOException {
    String lee = null;
    String usuario = "";
    CharSequence aux = nombre;
    try {
        registro = new File("Registro.txt");
        tempFile = new File("Registro1.txt");
        lector = new BufferedReader(new FileReader(registro));
        fw = new FileWriter(tempFile);
        writer = new BufferedWriter(fw);
    } catch (FileNotFoundException ex) {
        System.out.println(ex.getMessage());
    } catch (IOException ex) {
        System.out.println(ex.getMessage());
    } catch (NullPointerException e) {
        System.out.println(e.getMessage());
    }
    while ((lee = lector.readLine()) != null) {
        System.out.println(aux);
        if (lee.contains(nombre)) {
            continue;
        } else {
            writer.write(lee);
        }
    }
    lector.close();
    fw.close();
    writer.close();
    registro.delete();
}
I don't think we have quite enough data to see exactly why this is happening, but I can give you some strong suspicions.
Files.delete(Path) (unlike the boolean-returning File.delete()) throws four possible exceptions, three of which apply here: NoSuchFileException (worth specifically checking for), IOException, and SecurityException. NoSuchFileException is derived from IOException and would be caught anyway, though you might still want to catch it separately, since handling it as a plain IOException discards relevant detail. SecurityException is generally thrown when a security manager gets in the way, which happens all the time in web-based programs. I'm wondering if your code runs as some kind of applet or its modern equivalent, the web application? SecurityException is a RuntimeException, so you don't have to catch it, but you can and probably should. That would explain worlds.
Lastly, you can also use Files.deleteIfExists(Path). It returns true if the file was actually present to be deleted, and false if it could not be found. Worth looking into, because if your path is getting skewed and the file can't be found at the provided location, then it won't be deleted. It's reasonable to think that your program might have a different working directory than you expect. This is more or less the same as checking for NoSuchFileException, though.
You might even check your working directory, to be sure, with System.out.println(System.getProperty("user.dir")).
My money is on SecurityException or the path being wrong. However, if I'm wrong about this, it would be nice if you could show us your other button code, because there could be a relevant problem there, too.
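A minimal sketch of that diagnostic approach using java.nio.file (the relative path "Registro.txt" resolves against the working directory, which is printed first):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeleteCheck {
    public static void main(String[] args) {
        // Print the working directory that the relative path resolves against.
        System.out.println(System.getProperty("user.dir"));
        Path registro = Paths.get("Registro.txt");
        try {
            boolean deleted = Files.deleteIfExists(registro);
            System.out.println(deleted
                    ? "Deleted"
                    : "No file at " + registro.toAbsolutePath());
        } catch (IOException e) {
            // e.g. the file is still open in this process, or a permission problem
            e.printStackTrace();
        }
    }
}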
I read a file, create an object from each entry, and store it in a PostgreSQL database. My file has 100,000 documents that I read from one file, split, and finally store in the database.
I can't create a List<> and store all the documents in it because I have very little RAM. My code to read and write to the database is below, but my JVM heap fills up and it cannot continue to store more documents. How do I read the file and store it to the database efficiently?
public void readFile() {
    StringBuilder wholeDocument = new StringBuilder();
    try {
        bufferedReader = new BufferedReader(new FileReader(files));
        String line;
        int count = 0;
        while ((line = bufferedReader.readLine()) != null) {
            if (line.contains("<page>")) {
                wholeDocument.append(line);
                while ((line = bufferedReader.readLine()) != null) {
                    wholeDocument = wholeDocument.append("\n" + line);
                    if (line.contains("</page>")) {
                        System.out.println(count++);
                        addBodyToDatabase(wholeDocument.toString());
                        wholeDocument.setLength(0);
                        break;
                    }
                }
            }
        }
        wikiParser.commit();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            bufferedReader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

public void addBodyToDatabase(String wholeContent) {
    Page page = new Page(new Timestamp(System.currentTimeMillis()),
            wholeContent);
    database.addPageToDatabase(page);
}

public static int counter = 1;

public void addPageToDatabase(Page page) {
    session.save(page);
    if (counter % 3000 == 0) {
        commit();
    }
    counter++;
}
First of all, you should apply a fork/join-style approach here.
The main task parses the file and sends batches of at most 100 items to an ExecutorService. The ExecutorService should have a number of worker threads equal to the number of available database connections. If you have 4 CPU cores, let's say the database can take 8 concurrent connections without doing too much context switching.
You should then configure a connection-pooling DataSource with minSize equal to maxSize, both set to 8. Try HikariCP or ViburDBCP for connection pooling.
Then you need to configure JDBC batching. If you're using MySQL, the IDENTITY generator will disable batching. If you're using a database that supports sequences, make sure you also use the enhanced identifier generators (they are the default option in Hibernate 5.x).
This way the entity insert process is parallelized and decoupled from the main parsing thread. The main thread should wait for the ExecutorService to finish processing all tasks before shutting it down, as in the sketch below.
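A rough sketch of that producer/consumer split (the names BATCH_SIZE, WORKERS, and persistBatch() are illustrative, not from the original code):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelLoader {
    private static final int BATCH_SIZE = 100; // items per task
    private static final int WORKERS = 8;      // match the connection pool size

    public void load(Iterable<String> documents) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
        List<String> batch = new ArrayList<>(BATCH_SIZE);
        for (String doc : documents) {
            batch.add(doc);
            if (batch.size() == BATCH_SIZE) {
                final List<String> toPersist = new ArrayList<>(batch);
                pool.submit(() -> persistBatch(toPersist)); // one transaction per batch
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            pool.submit(() -> persistBatch(new ArrayList<>(batch)));
        }
        pool.shutdown(); // main thread waits for all tasks to finish
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private void persistBatch(List<String> docs) {
        // open a session/connection, insert all docs in one transaction, commit
    }
}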
Actually it is hard to make suggestions without doing real profiling to find out what makes your code slow or inefficient.
However, there are several things we can see from your code:
You are using StringBuilder inefficiently.
wholeDocument.append("\n" + line); should be written as wholeDocument.append("\n").append(line); instead.
Because what you originally wrote is translated by the compiler into
wholeDocument.append(new StringBuilder("\n").append(line).toString()). You can see how many unnecessary StringBuilder objects you create that way :)
Considerations in using Hibernate
I am not sure how you manage your session or how you implemented your commit(); I assume you have done it right, but there are still more things to consider:
Have you properly set up the batch size in Hibernate (hibernate.jdbc.batch_size)? By default, the JDBC batch size is something around 5. You may want to set it to a bigger value (so that internally Hibernate sends inserts in bigger batches).
Given that you do not need the entities in the first-level cache for later use, you may want to do an intermittent session flush() + clear() to
trigger the batch inserts mentioned in the previous point, and
clear out the first-level cache, as sketched below.
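A minimal sketch of that flush/clear pattern, reusing the session and counter fields from the question (the batch size of 50 is illustrative and should match hibernate.jdbc.batch_size):

// Flush and clear every BATCH_SIZE saves so the first-level cache
// doesn't accumulate 100,000 entities.
private static final int BATCH_SIZE = 50; // keep equal to hibernate.jdbc.batch_size

public void addPageToDatabase(Page page) {
    session.save(page);
    if (counter % BATCH_SIZE == 0) {
        session.flush();  // push the pending inserts as a JDBC batch
        session.clear();  // evict the persisted entities from the session
    }
    counter++;
}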
Switch away from Hibernate for this feature.
Hibernate is cool, but it is not a panacea for everything. Given that in this feature you are just saving records into the DB based on text-file content, and that you need neither entity behavior nor the first-level cache for later processing, there is not much reason to use Hibernate here given the extra processing and space overhead. Simply doing JDBC with manual batch handling is going to save you a lot of trouble.
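A sketch of what that could look like with plain JDBC batching (the pages table and its columns are hypothetical names for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

// Insert pages with manual JDBC batching; the 'pages' table and its
// columns are made-up names for illustration.
public void savePages(Connection conn, Iterable<String> bodies) throws SQLException {
    String sql = "INSERT INTO pages (created_at, body) VALUES (?, ?)";
    conn.setAutoCommit(false);
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        int count = 0;
        for (String body : bodies) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, body);
            ps.addBatch();
            if (++count % 1000 == 0) {
                ps.executeBatch(); // send 1000 inserts in one round trip
                conn.commit();
            }
        }
        ps.executeBatch(); // flush the remainder
        conn.commit();
    }
}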
I used @RookieGuy's answer.
stackoverflow.com/questions/14581865/hibernate-commit-and-flush
I use
session.flush();
session.clear();
and finally, after reading all documents and storing them in the database,
tx.commit();
session.close();
and changed
wholeDocument = wholeDocument.append("\n" + line);
to
wholeDocument.append("\n" + line);
I'm not very sure about the structure of your data file. It would be easier to understand if you could provide a sample of your file.
The root cause of the memory consumption is the way you read/iterate the file: once something is read, it stays in memory. You should rather use either java.io.FileInputStream or org.apache.commons.io.FileUtils.
Here is sample code that iterates with java.io.FileInputStream:
try (
    FileInputStream inputStream = new FileInputStream("/tmp/sample.txt");
    Scanner sc = new Scanner(inputStream, "UTF-8")
) {
    while (sc.hasNextLine()) {
        String line = sc.nextLine();
        addBodyToDatabase(line);
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Here is sample code that iterates with org.apache.commons.io.FileUtils:
File file = new File("/tmp/sample.txt");
LineIterator it = FileUtils.lineIterator(file, "UTF-8");
try {
    while (it.hasNext()) {
        String line = it.nextLine();
        addBodyToDatabase(line);
    }
} finally {
    LineIterator.closeQuietly(it);
}
You should begin a transaction, do the save operation, and commit the transaction. (Don't begin a transaction after save!) You can try to use a StatelessSession to exclude the memory consumed by the session cache.
And use a smaller value, for example 20, in this code:
if (counter % 20 == 0)
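A minimal sketch of the StatelessSession variant (assuming a configured SessionFactory; a StatelessSession has no first-level cache, so nothing accumulates between inserts):

import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

// Insert pages without a first-level cache; sessionFactory is assumed
// to be configured elsewhere in your application.
public void savePages(SessionFactory sessionFactory, Iterable<Page> pages) {
    StatelessSession session = sessionFactory.openStatelessSession();
    Transaction tx = session.beginTransaction();
    try {
        for (Page page : pages) {
            session.insert(page); // executes immediately, nothing is cached
        }
        tx.commit();
    } catch (RuntimeException e) {
        tx.rollback();
        throw e;
    } finally {
        session.close();
    }
}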
You can also try to pass the StringBuilder itself as a method argument wherever possible, instead of building intermediate strings.
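For example, a hypothetical helper along those lines:

// Appends into the caller's builder instead of returning a String,
// so no intermediate "\n" + line String is created.
static void appendPageLine(StringBuilder doc, String line) {
    doc.append('\n').append(line);
}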
In my application I have implemented a method to get the favourites of a particular user. If the user is new, there will not be an entry in the table; in that case I add default favourites to the table. The code is shown below.
public String getUserFavourits(String username) {
    String s = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID='" +
            username.trim() + "'";
    String a = "";
    Statement stm = null;
    ResultSet reset = null;
    DatabaseConnectionHandler handler = null;
    Connection conn = null;
    try {
        handler = DatabaseConnectionHandler.getInstance();
        conn = handler.getConnection();
        stm = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
        reset = stm.executeQuery(s);
        if (reset.next()) {
            a = reset.getString("FAVOURITS").toString();
        }
        reset.close();
        stm.close();
    } catch (SQLException ex) {
        ex.printStackTrace();
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        try {
            handler.returnConnectionToPool(conn);
            if (stm != null) {
                stm.close();
            }
            if (reset != null) {
                reset.close();
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
    if (a.equalsIgnoreCase("")) {
        a = updateNewUserFav(username);
    }
    return a;
}
You can see that after the finally block, the updateNewUserFav(username) method is used to insert default favourites into the table. Normally users are forced to change these on their first login.
My problem is that many users have complained to me that they have lost their customized favourites and the defaults have been loaded at their login. When I go through the code, I notice this can only happen if an exception occurred in the try block. When I debug, the code works fine. Can this be caused when the DB is busy?
Normally there are more than 1000 concurrent users in the system. Since it is a real-time application, there will be a huge number of requests coming to the database (the DB is Oracle).
Can someone please explain?
Firstly, use jonearles' suggestion about bind variables. If a lot of your code is like this, with 1000 concurrent users, I'd hate to think what performance is like.
Secondly, if the database is busy then there is a chance of time-outs. As you say, if an exception is encountered, the code falls back to updateNewUserFav.
Really, it should only call that if NO exception is raised.
If an exception is raised, the function should fail. The current code is similar to:
"TURN THE IGNITION KEY TO START THE CAR"
"IF THERE IS A PROBLEM, RING GARAGE AND BOOK APPOINTMENT"
"PUT CAR INTO GEAR AND RELEASE HAND_BRAKE"
You really only want to release the hand-brake once the car has successfully started, otherwise you'll end up rolling down the hill until the sudden stop at the end (often involving an expensive CRUNCH sound).
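To make that concrete, here is a sketch combining both points: a bind variable instead of string concatenation, and defaults loaded only when the query succeeded but found no row. (Closing the pooled connection via try-with-resources assumes your pool hands out connections whose close() returns them to the pool; otherwise keep returnConnectionToPool().)

public String getUserFavourits(String username) throws SQLException {
    String sql = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID = ?";
    String a = null;
    DatabaseConnectionHandler handler = DatabaseConnectionHandler.getInstance();
    try (Connection conn = handler.getConnection();
            PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, username.trim()); // bind variable, not string concatenation
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                a = rs.getString("FAVOURITS");
            }
        }
    }
    // Reaching this point means the query succeeded; an exception above now
    // propagates instead of silently falling through to the defaults.
    if (a == null) {
        a = updateNewUserFav(username); // only for genuinely new users
    }
    return a;
}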
Let's say I have a set of statements:
try {
    String a = getProperty("a");
    String b = getProperty("b");
    String c = getProperty("c");
} catch (Exception e) {
}
Now, let's say property b was not found and the function throws an exception. In this case, how would I just continue, or perhaps set b to null, without having to write a try-catch block for each property? I mean, a, b, and c exist, but sometimes they might not be found at all, in which case an exception is thrown.
Assuming you can't change the function so that it returns null when the property isn't found, you are kind of stuck wrapping everything in its own try-catch block -- especially if you want every value that can be retrieved to actually be retrieved (as opposed to letting the first failure cancel the whole operation).
If you have a lot of these properties to retrieve, perhaps it would be cleaner to write a helper method:
String getPropertySafely(String key) {
    try {
        return getProperty(key);
    } catch (Exception e) {
        return null;
    }
}
You have to put a try-catch around each statement. There is no way to continue after an exception (like there is with ON ERROR RESUME NEXT in VB). Instead of:
String a = null;
try {
    a = getProperty("a");
} catch (Exception e) {
    ...
}

String b = null;
try {
    b = getProperty("b");
} catch (Exception e) {
    ...
}

String c = null;
try {
    c = getProperty("c");
} catch (Exception e) {
    ...
}
you could write:
public String getPropertyNoException(String name) {
    try {
        return getProperty(name);
    } catch (Exception e) {
        return null;
    }
}
Personally, I think getProperty() is a poor candidate for throwing exceptions, precisely because of all the extra boilerplate it forces on callers.
Since you are using the same function each time you might be able to put this in a loop:
String[] abc = new String[3];
String[] param = {"a", "b", "c"};
for (int i = 0; i < 3; i++) {
    try {
        abc[i] = getProperty(param[i]);
    } catch (Exception e) {
    }
}
but this is rather contrived and would only be useful for a large number of properties. I suspect you will have to simply write the three try-catch blocks.
You should reconsider how getProperty is handled if you plan to call it many times, because there isn't a clean way to do this.
You can use a finally block, but you still need a try-catch for every call.
So I have an application which runs on top of GridGain and does so quite successfully for about 12-24 hours of stress testing before it starts to act funny. After this period of time the application will suddenly start replying to all queries with the exception java.nio.channels.ClosedByInterruptException (the full stack trace is at http://pastie.org/664717).
The method that it is failing in is (edited to use @stephenc's feedback):
public static com.vlc.edge.FileChannel createChannel(final File file) {
    FileChannel channel = null;
    try {
        channel = new FileInputStream(file).getChannel();
        channel.position(0);
        final com.vlc.edge.FileChannel fileChannel = new FileChannelImpl(channel);
        channel = null;
        return fileChannel;
    } catch (FileNotFoundException e) {
        throw new VlcRuntimeException("Failed to open file: " + file, e);
    } catch (IOException e) {
        throw new VlcRuntimeException(e);
    } finally {
        if (channel != null) {
            try {
                channel.close();
            } catch (IOException e) {
                // noop
                LOGGER.error("There was a problem closing the file: " + file);
            }
        }
    }
}
and the calling function correctly closes the object
private void fillContactBuffer(final File signFile) {
    contactBuffer = ByteBuffer.allocate((int) signFile.length());
    final FileChannel channel = FileUtils.createChannel(signFile);
    try {
        channel.read(contactBuffer);
    } finally {
        channel.close();
    }
    contactBuffer.rewind();
}
The application basically serves as a distributed file parser, so it does a lot of these types of operations (it will typically open about 10 such channels per query per node). It seems that after a certain period it stops being able to open files, and I'm at a loss to explain why this could be happening. I would greatly appreciate anyone who can tell me what could be causing this and how I could go about tracking it down and fixing it. If it is possibly related to file handle exhaustion, I'd love to hear any tips for finding out for sure... i.e. querying the JVM while it's running or using Linux command-line tools to find out more about what handles are currently open.
Update: I've been using command-line tools to interrogate the output of lsof and haven't been able to see any evidence that file handles are being held open... each node in the grid has a very stable profile of opened files, which I can see changing as the above code is executed... but it always returns to a stable number of open files.
Related to this question: Freeing java file handles
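On the "querying the JVM while it's running" point, one way to watch the descriptor count from inside the process is the platform MXBean, assuming a HotSpot-style JVM on Unix where it can be cast to com.sun.management.UnixOperatingSystemMXBean:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Logs this process's open file descriptor count; only works where the
// platform bean is a com.sun.management.UnixOperatingSystemMXBean.
public static void logOpenFileDescriptors() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
        com.sun.management.UnixOperatingSystemMXBean unixOs =
                (com.sun.management.UnixOperatingSystemMXBean) os;
        System.out.println("Open FDs: " + unixOs.getOpenFileDescriptorCount()
                + " / max " + unixOs.getMaxFileDescriptorCount());
    }
}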
There are a couple of scenarios where file handles might not be being closed:
There might be some other code that opens files.
There might be some other bit of code that calls createChannel(...) and doesn't call fillContactBuffer(...).
If channel.position(0) throws an exception, the channel won't be closed. The fix is to rearrange the code so that the following statements are inside the try block:
channel.position(0);
return new FileChannelImpl(channel);
EDIT: Looking at the stack trace, it seems that the two methods are in different code bases. I'd point the finger of blame at the createChannel method. It is potentially leaky, even if it is not the source of your problems. It needs an internal finally clause to make sure that the channel is closed in the event of an exception.
Something like this should do the trick. Note that you need to make sure that the finally block does not close the channel on success!
public static com.vlc.edge.FileChannel createChannel(final File file) {
    FileChannel channel = null;
    try {
        channel = new FileInputStream(file).getChannel();
        channel.position(0);
        com.vlc.edge.FileChannel res = new FileChannelImpl(channel);
        channel = null; // ownership transferred; don't close on success
        return res;
    } catch (FileNotFoundException e) {
        throw new VlcRuntimeException("Failed to open file: " + file, e);
    } catch (IOException e) {
        throw new VlcRuntimeException(e);
    } finally {
        if (channel != null) {
            try {
                channel.close();
            } catch (IOException e) {
                // log and ignore; the original exception matters more
            }
        }
    }
}
FOLLOWUP, much later
Given that file handle leakage has been eliminated as a possible cause, my next theory would be that the server side is actually interrupting its own threads using Thread.interrupt(). Some low-level I/O calls respond to an interrupt by throwing an exception, and the root exception being thrown here looks like one such exception.
This doesn't explain why it is happening, but at a wild guess I'd say the server-side framework is trying to resolve an overload or deadlock problem.
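A small self-contained demonstration of that mechanism: interrupting a thread that uses an interruptible FileChannel yields exactly this exception (the temp file here is just scaffolding for the demo):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("demo", ".dat");
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[1024]);
        }
        Thread reader = new Thread(() -> {
            try (FileChannel channel = new FileInputStream(tmp).getChannel()) {
                // A thread whose interrupt flag is set gets
                // ClosedByInterruptException from interruptible channel I/O.
                Thread.currentThread().interrupt();
                channel.read(ByteBuffer.allocate(1024));
            } catch (Exception e) {
                e.printStackTrace(); // java.nio.channels.ClosedByInterruptException
            }
        });
        reader.start();
        reader.join();
    }
}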