I need to include about 1 MByte of data in a Java application, for very fast and easy access in the rest of the source code. My main background is not Java, so my initial idea was to convert the data directly to Java source code, defining 1MByte of constant arrays, classes (instead of C++ struct) etc., something like this:
public final/immutable/const MyClass MyList[] = {
{ 23012, 22, "Hamburger"} ,
{ 28375, 123, "Kieler"}
};
However, it seems that Java does not support such constructs. Is this correct? If yes, what is the best solution to this problem?
NOTE: The data consists of 2 tables, each with about 50000 records, which is to be searched in various ways. This may require some indexes later, with significantly more records, maybe 1 million, stored this way. I expect the application to start up very fast, without iterating through these records.
I personally wouldn't put it in source form.
Instead, include the data in some appropriate raw format in your jar file (I'm assuming you'll be packaging the application or library up) and use Class.getResourceAsStream or ClassLoader.getResourceAsStream to load it.
You may very well want a class to encapsulate loading, caching and providing this data - but I don't see much benefit from converting it into source code.
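A minimal sketch of such an encapsulating loader; the resource name "/data.txt" and the line format are assumptions for illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class DataLoader {
    // Reads all lines from a stream; pass DataLoader.class.getResourceAsStream("/data.txt")
    // to load a file packaged at the root of the jar ("/data.txt" is a hypothetical name).
    public static List<String> readLines(InputStream in) throws IOException {
        List<String> lines = new ArrayList<String>();
        BufferedReader br = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
        String line;
        while ((line = br.readLine()) != null) {
            lines.add(line);
        }
        return lines;
    }
}
```

Keeping the stream handling behind one method like this makes it easy to swap the raw format later without touching callers.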
Due to limitations of the Java class-file format, the bytecode of a single method cannot be larger than 64k, so a huge generated initializer won't even compile. (Class files are simply not intended for this type of data.)
I would load the data upon starting the program, using something like the following lines of code:
import java.io.*;
import java.util.*;

public class Test {
    public static void main(String... args) throws IOException {
        List<DataRecord> records = new ArrayList<DataRecord>();
        BufferedReader br = new BufferedReader(new FileReader("data.txt"));
        try {
            String s;
            while ((s = br.readLine()) != null) {
                String[] arr = s.split(" ");
                int i = Integer.parseInt(arr[0]);
                int j = Integer.parseInt(arr[1]);
                records.add(new DataRecord(i, j, arr[2])); // arr[2] is the string column, not arr[0]
            }
        } finally {
            br.close();
        }
    }
}

class DataRecord {
    public final int i, j;
    public final String s;

    public DataRecord(int i, int j, String s) {
        this.i = i;
        this.j = j;
        this.s = s;
    }
}
(NB: The Scanner is quite slow, so don't be tempted to use it just because it has a simple interface. Stick with some form of BufferedReader and split, or StringTokenizer.)
Efficiency can of course be improved if you transform the data into a binary format. In that case, you can make use of DataInputStream (but don't forget to wrap the underlying stream in a BufferedInputStream).
Depending on how you wish to access the data, you might be better off storing the records in a hash-map (HashMap<Integer, DataRecord>) (having i or j as the key).
If you wish to load the data at the same time as the JVM loads the class file itself (roughly!) you could do the read / initialization, not within a method, but encapsulated in a static { ... } initializer block.
For a memory-mapped approach, have a look at the java.nio.channels-package in java. Especially the method
public abstract MappedByteBuffer map(FileChannel.MapMode mode, long position,long size) throws IOException
Complete code examples can be found here.
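A minimal sketch of the memory-mapped approach; the file name and the fixed-size record layout are assumptions:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedReader {
    // Maps the whole file read-only and returns the 4-byte int at the given byte offset.
    // With fixed-size records, offset = recordIndex * recordSize + fieldOffset.
    public static int readIntAt(Path file, long offset) throws IOException {
        try (FileChannel fc = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
            return buf.getInt((int) offset);
        }
    }
}
```

In a real application you would map the file once at startup and keep the buffer around, rather than remapping on every read.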
Dan Bornstein (the lead developer of DalvikVM) explains a solution to your problem in this talk (Look around 0:30:00). However I doubt the solution applies to as much data as a megabyte.
An idea is to use enums, but I'm not sure if this suits your implementation, and it also depends on how you are planning to use the data.
public enum Stuff {
    HAMBURGER(23012, 22),
    KIELER(28375, 123);

    private int a;
    private int b;

    // private constructor, does not need to be called explicitly
    private Stuff(int a, int b) {
        this.a = a;
        this.b = b;
    }

    public int getAvalue() {
        return this.a;
    }

    public int getBvalue() {
        return this.b;
    }
}
These can then be accessed like:
Stuff someThing = Stuff.HAMBURGER;
int hamburgerA = Stuff.HAMBURGER.getAvalue(); // = 23012
Another idea is using a static initializer to set private fields of a class.
Putting the data into source code would actually not be the fastest solution, not by a long shot. Loading a Java class is quite complex and slow (at least on a platform that does bytecode verification, not sure about Android).
The fastest possible way to do this would be to define your own binary index format. You could then read that as a byte[] (possibly using memory mapping) or even a RandomAccessFile without interpreting it in any way until you start accessing it. The cost of this would be the complexity of the code that accesses it. With fixed-size records, a sorted list of records that's accessed via binary search would still be pretty simple, but anything else is going to get ugly.
Though before doing that, are you sure this isn't premature optimization? The easiest (and probably still quite fast) solution would be to just serialize a Map, List or array - have you tried this and determined that it is, in fact, too slow?
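To illustrate the binary-search idea, a sketch over fixed-size records held in a ByteBuffer; the record layout (a leading 4-byte sorted int key) is an assumption:

```java
import java.nio.ByteBuffer;

public class FixedRecordIndex {
    private final ByteBuffer data;   // concatenated fixed-size records, sorted by key
    private final int recordSize;    // bytes per record; the first 4 bytes are the int key

    public FixedRecordIndex(ByteBuffer data, int recordSize) {
        this.data = data;
        this.recordSize = recordSize;
    }

    // Returns the byte offset of the record with the given key, or -1 if absent.
    public int findOffset(int key) {
        int lo = 0;
        int hi = data.capacity() / recordSize - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int k = data.getInt(mid * recordSize);   // absolute get: no deserialization
            if (k == key) return mid * recordSize;
            if (k < key) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }
}
```

The buffer could equally be a MappedByteBuffer over the raw index file, so nothing is parsed until a field is actually read.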
convert the data directly to Java source code, defining 1MByte of constant arrays, classes
Be aware that there are strict constraints on the size of classes and their structures [ref: JVM Spec.].
This is how you define it in Java, if I understood what you are after:
public final Object[][] myList = {
{ 23012, 22, "Hamburger"} ,
{ 28375, 123, "Kieler"}
};
It looks like you plan to write your own lightweight database.
If you can limit the length of the String to a realistic max size the following might work:
write each entry into a binary file; the entries all have the same size, so you waste some bytes with each entry (int a, int b, int stringSize, string, padding)
To read an entry, open the file as a random access file, multiply the index by the length of an entry to get the offset, and seek to that position.
Put the bytes into a ByteBuffer and read the values; the String has to be built with the String(byte[], int start, int length, Charset) constructor.
If you can't limit the length of a block, dump the strings in an additional file and only store the offsets in your table. This requires an additional file access and makes modifying the data hard.
Some information about random file access in Java can be found here: http://java.sun.com/docs/books/tutorial/essential/io/rafs.html
For faster access you can cache some of your read entries in a Hashmap and always remove the oldest from the map when reading a new one.
Pseudo code (cleaned up so it compiles):
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.StandardOpenOption;
import java.util.*;

class MyDataStore {
    static final int ENTRY_LENGTH = 100; // bytes per fixed-size entry
    FileChannel fc = null;
    Map<Integer, Entry> myCache = new HashMap<Integer, Entry>();
    int cacheSize = 50000;
    ArrayList<Integer> queue = new ArrayList<Integer>();

    void open(File f) throws Exception {
        fc = FileChannel.open(f.toPath(), StandardOpenOption.READ);
    }

    void close() throws Exception {
        fc.close();
        fc = null;
    }

    Entry getEntryAt(int index) throws Exception {
        if (myCache.containsKey(index)) return myCache.get(index);
        long pos = (long) index * ENTRY_LENGTH;
        ByteBuffer b = ByteBuffer.allocate(ENTRY_LENGTH);
        fc.read(b, pos);          // read one fixed-size entry at its offset
        b.flip();
        Entry a = new Entry(b);
        queue.add(index);         // remember insertion order for eviction
        myCache.put(index, a);
        if (queue.size() > cacheSize) myCache.remove(queue.remove(0));
        return a;
    }
}

class Entry {
    int a;
    int b;
    String s;

    public Entry(ByteBuffer bb) {
        a = bb.getInt();
        b = bb.getInt();
        int size = bb.getInt();
        byte[] bin = new byte[size];
        bb.get(bin);
        s = new String(bin, StandardCharsets.UTF_8);
    }
}
Missing from the pseudocode:
writing, which you still need in order to create the file of constant data in the first place
the total number of entries / the size of the file; this only needs an additional integer at the beginning of the file and an additional 4-byte offset for each access operation
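As an aside, the manual "remove the oldest" bookkeeping in the sketch can also be had out of the box from LinkedHashMap, whose removeEldestEntry hook turns it into a bounded cache; a minimal generic sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        // accessOrder = true gives LRU eviction instead of plain insertion order
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by put(); returning true evicts the least recently used entry.
        return size() > maxEntries;
    }
}
```

This replaces both the HashMap and the ArrayList queue with a single structure.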
You could also declare a static class (or a set of static classes) exposing the desired values as methods. After all, you want your code to be able to find the value for a given name, and don't want the value to change.
So: location=MyLibOfConstants.returnHamburgerLocation().zipcode
And you can store this stuff in a hashtable with lazy initialization, if you think that calculating it on the fly would be a waste of time.
Isn't a cache what you need?
As classes, the data is loaded into memory, not really limited to a defined size, and should be as fast as using constants...
Actually you can even search the data with some kind of index (for example the object hashcode...)
You can for example create all your data arrays (e.g. { 23012, 22, "Hamburger"}) and then create 3 hashmaps:
map1.put(23012,hamburgerItem);
map2.put(22,hamburgerItem);
map3.put("Hamburger",hamburgerItem);
This way you can search very fast in one of the maps according to the parameter you have...
(but this works only if your keys are unique in each map... this is just an example that could inspire you)
At work we have a very big webapp (80 WebLogic instances) and it's almost what we do: caching everywhere. From a country list in the database, we create a cache...
There are many different kinds of caches; you should check the link and choose what you need...
http://en.wikipedia.org/wiki/Cache_algorithms
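A compilable version of the three-map idea above; the Item class and its field names are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class MultiIndex {
    public static class Item {
        public final int code;
        public final int qty;
        public final String name;

        public Item(int code, int qty, String name) {
            this.code = code;
            this.qty = qty;
            this.name = name;
        }
    }

    public final Map<Integer, Item> byCode = new HashMap<Integer, Item>();
    public final Map<Integer, Item> byQty = new HashMap<Integer, Item>();
    public final Map<String, Item> byName = new HashMap<String, Item>();

    // Register one record under all three keys; every map shares the same instance,
    // so memory cost is three map entries per record, not three copies of the data.
    public void add(Item item) {
        byCode.put(item.code, item);
        byQty.put(item.qty, item);
        byName.put(item.name, item);
    }
}
```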
Java serialization sounds like something that needs to be parsed... not good. Isn't there some kind of standard format for storing data in a stream, that can be read/looked up using a standard API without parsing it?
If you were to create the data in code, then it would all be loaded on first use. This is unlikely to be much more efficient than loading from a separate file - as well as parsing the data in the class file, the JVM has to verify and compile the bytecodes to create each object a million times, rather than just the once if you load it from a loop.
If you want random access and can't use a memory mapped file, then there is a RandomAccessFile which might work. You need either to load a index on start, or you need to make the entries a fixed length.
You might want to check whether the HDF5 libraries run on your platform; it may be overkill for such a simple and small dataset though.
I would recommend using assets for storing such data.
Related
I want to code a program where the user can decide in which object to store values. The user can use Sets, Lists and Files (e.g. .txt, .xml). I want to write an interface where, in the end, it doesn't matter which ("storing") object the user chooses, so that I don't have to write the same methods for every decision.
How should I make an interface for that? Is the interface approach even suitable, and what else do I need to do/consider?
import java.io.File;

public class StoreValues implements SaveInGeneral<SomeObject> {
    // user's decision (LinkedList, Set or File)
    if (decision == 1) {
        SaveInGeneral<SomeObject> obj = new LinkedList<>();
    }
    if (decision == 2) {
        SaveInGeneral<SomeObject> obj = new File();
    }
    // ...
    obj.add(someObject);
}
SaveInGeneral doesn't fit common naming strategies, which involve trying to name things with nouns. I'd call it Storage, for example.
The generics don't seem useful here - the whole point is to abstract away what the underlying storage mechanism is. So get rid of that.
Then, just define what, exactly, 'save an object' means. For example, a List can store items (.add(newItem)), but you can retrieve items by index (.get(5)), create an iterator (with .iterator()) so that you can for (String v : list) through it, and ask it its size (.size()), etc.
What kind of primitives are you looking for?
Presumably if all this does is store objects and nothing else, the one and only method you're looking for is .store(Object o).
The problem is, the task: "Store an arbitrary object on disk" just does not work. Most objects cannot be stored to disk at all. I strongly suggest you then limit the .store() method to only allow things you know how to store. You could go with Serializable, but that is a giant can of worms (serializable is extremely convoluted), or you need to involve third party libraries such as Jackson that attempt to marshall objects into e.g. JSON or XML.
You then need to think about the needs of your various targeted platforms (files, databases, lists, sets, etc), and cross that off vs. the needs of your code that needs to store things. Find the subset such that it consists solely of things which are feasible to implement in all targeted storage mechanisms, and which is sufficient for your code that needs a storage backend.
This can get complicated fast. For example, when reading out JSON produced by Jackson, you need to provide which class you want to read the JSON into, which is not a thing lists need (they know which object kind they stored already). Files, in turn, don't like it if you keep writing a tiny blob of data, then close the file, then open it again - the overhead means that this:
loop 1000 times and do: Open file, write ~50 bytes of data, close file.
is literally about 1000 times slower vs:
Open file, loop 1000 times and do: write 50 bytes of data. Then close file.
In other words, you'd have to update your API to involve an opening and a closing step, or accept that the file based storage backend is incredibly (1000x) slow.
Here is the most simple take - let's store only Strings because that's easy to send to anything from DBs to files to lists to network sockets, and let's just accept an inefficient algorithm for now:
public interface Storage {
    public void store(String data) throws IOException;
}
some implementations:
public class ListBasedStorage implements Storage {
    private final List<String> list = new ArrayList<String>();

    public List<String> getBackingList() {
        return list;
    }

    public void store(String data) {
        list.add(data);
    }
}
public class FileBasedStorage implements Storage {
    private final Path target;
    private static final Charset CHARSET = StandardCharsets.UTF_8;

    public FileBasedStorage(Path p) {
        this.target = p;
    }

    public void store(String data) throws IOException {
        String line = data.replaceAll("\\R", " ") + "\n";
        Files.write(target, line.getBytes(CHARSET),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
and to use this:
public static void main(String[] args) throws Exception {
    Storage storage = new FileBasedStorage(Paths.get("mydata.txt"));
    new MyApp().sayHello(storage);
}

public void sayHello(Storage storage) throws IOException {
    storage.store("Hello!");
    storage.store("World");
}
You can then start complicating matters from here. Add more data types, or use JSON and a JSON marshaller like Jackson to turn the data into stuff you can put in a file or DB. Add retrieval code, where storage can also be asked how many entries it contains, or for an iterator to go through the data, and so on. Finally, consider a 2-layered approach where you ask the storage for a 'session', which must be safely closed using try (Session session = storage.start()) { ... }, in order to have fast file and DB writes (both files and DBs are transactional, in the sense that they work far better if you explicitly start, do stuff, and then save all you just did).
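A sketch of that 2-layered session idea; the SessionStorage/Session names and the in-memory implementation are made up for illustration:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

interface SessionStorage {
    Session start() throws IOException;

    // One open "transaction": batches writes and must be closed to commit them.
    interface Session extends AutoCloseable {
        void store(String data) throws IOException;
        void close() throws IOException;
    }
}

// In-memory implementation: buffers writes and commits them on close(),
// mimicking how a file or DB backend would open once and flush at the end.
class ListSessionStorage implements SessionStorage {
    final List<String> committed = new ArrayList<String>();

    @Override
    public Session start() {
        final List<String> buffer = new ArrayList<String>();
        return new Session() {
            @Override public void store(String data) { buffer.add(data); }
            @Override public void close() { committed.addAll(buffer); }
        };
    }
}
```

A file-backed implementation would open the file in start() and close it in close(), avoiding the 1000x open/write/close penalty described above.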
I am new to chronicle-map. I am trying to model an off heap map using chronicle-map where the key is a primitive short and the value is a primitive long array. The max size of the long array value is known for a given map. However I will have multiple maps of this kind each of which may have a different max size for the long array value. My question relates to the serialisation/deserialisation of the key and value.
From reading the documentation I understand that for the key I can use the value type ShortValue and reuse the instance of the implementation of that interface. Regarding the value I have found the page talking about DataAccess and SizedReader which gives an example for byte[] but I'm unsure how to adapt this to a long[]. One additional requirement I have is that I need to get and set values at arbitrary indices in the long array without paying the cost of a full serialisation/deserialisation of the entire value each time.
So my question is: how can I model the value type when constructing the map and what serialisation/deserialisation code do I need for a long[] array if the max size is known per map and I need to be able to read and write random indices without serialising/deserialising the entire value payload each time? Ideally the long[] would be encoded/decoded directly to/from off heap without undergoing an on heap intermediate conversion to a byte[] and also the chronicle-map code would not allocate at runtime. Thank you.
First, I recommend to use some kind of LongList interface abstraction instead of long[], it will make it easier to deal with size variability, provide alternative flyweight implementations, etc.
If you want to read/write just single elements in large lists, you should use advanced contexts API:
/** This method is entirely garbage-free, deserialization-free, and thread-safe. */
void putOneValue(ChronicleMap<ShortValue, LongList> map, ShortValue key, int index,
long element) {
if (index < 0) throw new IndexOutOfBoundsException(...);
try (ExternalMapQueryContext<ShortValue, LongList, ?> c = map.getContext(key)) {
c.writeLock().lock(); // (1)
MapEntry<ShortValue, LongList> entry = c.entry();
if (entry != null) {
Data<LongList> value = entry.value();
BytesStore valueBytes = (BytesStore) value.bytes(); // (2)
long valueBytesOffset = value.offset();
long valueBytesSize = value.size();
int valueListSize = (int) (valueBytesSize / Long.BYTES); // (3)
if (index >= valueListSize) throw new IndexOutOfBoundsException(...);
valueBytes.writeLong(valueBytesOffset + ((long) index) * Long.BYTES,
element);
((ChecksumEntry) entry).updateChecksum(); // (4)
} else {
// there is no entry for the given key
throw ...
}
}
}
Notes:
You must acquire writeLock() from the beginning, because otherwise readLock() is going to be acquired automatically when you call context.entry() method, and you won't be able to upgrade read lock to write lock later. Please read HashQueryContext javadoc carefully.
Data.bytes() formally returns RandomDataInput, but you could be sure (it's specified in Data.bytes() javadoc) that it's actually an instance of BytesStore (that's combination of RandomDataInput and RandomDataOutput).
Assuming proper SizedReader and SizedWriter (or DataAccess) are provided. Note that "bytes/element joint size" technique is used, the same as in the example given in SizedReader and SizedWriter doc section, PointListSizeMarshaller. You could base your LongListMarshaller on that example class.
This cast is specified, see ChecksumEntry javadoc and the section about checksums in the doc. If you have a purely in-memory (not persisted) Chronicle Map, or turned checksums off, this call could be omitted.
Implementation of single element read is similar.
Answering extra questions:
I've implemented a SizedReader+Writer. Do I need DataAccess or is SizedWriter fast enough for primitive arrays? I looked at the ByteArrayDataAccess but it's not clear how to port it for long arrays given that the internal HeapBytesStore is so specific to byte[]/ByteBuffers?
Usage of DataAccess instead of SizedWriter allows one less copy of the value data on Map.put(key, value). However, if in your use case putOneValue() (as in the example above) is the dominating type of query, it won't make much difference. If Map.put(key, value) (and replace(), etc., i.e. any "full value write" operations) are important, it is still possible to implement DataAccess for LongList. It will look like this:
class LongListDataAccess implements DataAccess<LongList>, Data<LongList>,
        StatefulCopyable<LongListDataAccess> {

    transient BytesStore cachedBytes;
    transient boolean cachedBytesInitialized;
    transient LongList list;

    @Override public Data<LongList> getData(LongList list) {
        this.list = list;
        this.cachedBytesInitialized = false;
        return this;
    }

    @Override public long size() {
        return ((long) list.size()) * Long.BYTES;
    }

    @Override public void writeTo(RandomDataOutput target, long targetOffset) {
        for (int i = 0; i < list.size(); i++) {
            target.writeLong(targetOffset + ((long) i) * Long.BYTES, list.get(i));
        }
    }

    ...
}
For efficiency, the methods size() and writeTo() are key. But it's important to implement all the other methods (which I didn't write here) correctly too. Read the DataAccess, Data and StatefulCopyable javadocs very carefully, and also go through "Understanding StatefulCopyable, DataAccess and SizedReader" and the custom serialization checklist in the tutorial with great attention.
Does the read/write locking mediate across multiple process reading and writing on same machine or just within a single process?
It's safe across processes; note that the interface is called InterProcessReadWriteUpdateLock.
When storing objects, with a variable size not known in advance, as values will that cause fragmentation off heap and in the persisted file?
Storing a value for a key once and not changing its size (and not removing keys) after that won't cause external fragmentation. Changing the size of a value or removing keys could cause external fragmentation. The ChronicleMapBuilder.actualChunkSize() configuration lets you trade between external and internal fragmentation: the bigger the chunk, the less external fragmentation, but the more internal fragmentation. If your values are significantly bigger than the page size (4 KB), you can set up an absurdly big chunk size and still have internal fragmentation bounded by the page size, because Chronicle Map is able to exploit the lazy page allocation feature in Linux.
I am trying to figure out a way to serialize simple Java objects (ie all the fields are primitive types) compactly, without the big header that normally gets added on when you use writeExternal. It does not need to be super general, backwards compatible across versions, or anything like that, I just want it to work with ObjectOutputStreams (or something similar) and not add ~100 bytes to the size of each object I serialize.
More concretely, I have a class that has 3 members: a boolean flag and two longs. I should be able to represent this object in 17 bytes. Here is a simplified version of the code:
class Record implements Externalizable {
    boolean b;
    long id;
    long uid;

    public void writeExternal(ObjectOutput out) throws IOException {
        int size = 1 + 8 + 8; //I know, I know, but there's no sizeof
        ByteBuffer buff = ByteBuffer.allocate(size);
        if (b) {
            buff.put((byte) 1);
        } else {
            buff.put((byte) 0);
        }
        buff.putLong(id);
        buff.putLong(uid);
        out.write(buff.array(), 0, size);
    }
}
Elsewhere, these are stored by being passed into a method like the following:
public void store(Object value) throws IOException {
ObjectOutputStream out = getStream();
out.writeObject(value);
out.close();
}
After I store just one of these objects in a file this way, the file has a size of 128 bytes (and 256 for two of them, so it's not amortized). Looking at the file, it is clear that it is writing in a header similar to the one used in default serialization (which, for the record, uses about 376 bytes to store one of these). I can see that my writeExternal method is getting invoked (I put in some logging), so that isn't the problem. Is this just a fundamental limitation of the way ObjectOutputStream serializes things? Do I need to work on raw DataOutputStreams to get the kind of compactness I want?
[EDIT: In case anyone is wondering, I ended up using DataOutputStreams directly, which turned out to be easier than I'd feared]
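For reference, the direct DataOutputStream route does give exactly the 17 bytes; a minimal sketch with hypothetical helper names:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RecordIo {
    // Writes the three fields with no stream header: 1 + 8 + 8 = 17 bytes.
    public static byte[] write(boolean b, long id, long uid) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeBoolean(b);
        out.writeLong(id);
        out.writeLong(uid);
        out.flush();
        return bos.toByteArray();
    }

    // Reads the fields back in the same order they were written.
    public static long[] readIds(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        in.readBoolean();
        return new long[] { in.readLong(), in.readLong() };
    }
}
```

In a file-backed version the ByteArrayOutputStream is simply replaced by a (buffered) FileOutputStream.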
I'm considering using Java for a large project but I haven't been able to find anything that remotely represented structures in Java. I need to be able to convert network packets to structures/classes that can be used in the application.
I know that it is possible to use RandomAccessFile but this way is NOT acceptable. So I'm curious if it is possible to "cast" a set of bytes to a structure like I could do in C. If this is not possible then I cannot use Java.
So the question I'm asking is if it is possible to cast aligned data to a class without any extra effort beyond specifying the alignment and data types?
No. You cannot cast an array of bytes to a class object.
That being said, you can use a java.nio.Buffer and easily extract the fields you need to an object like this:
class Packet {
    private final int type;
    private final float data1;
    private final short data2;

    public Packet(byte[] bytes) {
        ByteBuffer bb = ByteBuffer.wrap(bytes);
        bb.order(ByteOrder.BIG_ENDIAN); // or LITTLE_ENDIAN
        type = bb.getInt();
        data1 = bb.getFloat();
        data2 = bb.getShort();
    }
}
You're basically asking whether you can use a C-specific solution to a problem in another language. The answer is, predictably, 'no'.
However, it is perfectly possible to construct a class that takes a set of bytes in its constructor and constructs an appropriate instance.
class Foo {
    int someField;
    String anotherField;

    public Foo(byte[] bytes) {
        someField = someFieldFromBytes(bytes);
        anotherField = anotherFieldFromBytes(bytes);
        // etc.
    }
}
You can ensure there is a one-to-one mapping of class instances to byte arrays. Add a toBytes() method to serialize an instance into bytes.
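A self-contained round-trip sketch along those lines; the Packet fields mirror the earlier example and are assumptions:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class Packet {
    final int type;
    final float data1;
    final short data2;

    // Deserialize: read the fields in a fixed order and endianness.
    Packet(byte[] bytes) {
        ByteBuffer bb = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN);
        type = bb.getInt();
        data1 = bb.getFloat();
        data2 = bb.getShort();
    }

    Packet(int type, float data1, short data2) {
        this.type = type;
        this.data1 = data1;
        this.data2 = data2;
    }

    // Serialize: write the fields in exactly the order the constructor reads them,
    // giving the one-to-one mapping between instances and byte arrays.
    byte[] toBytes() {
        ByteBuffer bb = ByteBuffer.allocate(4 + 4 + 2).order(ByteOrder.BIG_ENDIAN);
        bb.putInt(type);
        bb.putFloat(data1);
        bb.putShort(data2);
        return bb.array();
    }
}
```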
No, you cannot do that. Java simply doesn't have the same concepts as C.
You can create a class that behaves much like a struct:
public class Structure {
public int field1;
public String field2;
}
and you can have a constructor that takes an array of bytes or a DataInput to read the bytes:
public class Structure {
    ...
    public Structure(byte[] data) throws IOException {
        this(new DataInputStream(new ByteArrayInputStream(data)));
    }

    public Structure(DataInput in) throws IOException {
        field1 = in.readInt();
        field2 = in.readUTF();
    }
}
then read bytes off the wire and pump them into Structures:
byte[] bytes = network.read();
DataInputStream stream = new DataInputStream(new ByteArrayInputStream(bytes));
Structure structure1 = new Structure(stream);
Structure structure2 = new Structure(stream);
...
It's not as concise as C but it's pretty close. Note that the DataInput interface cleanly removes any mucking around with endianness on your behalf, so that's definitely a benefit over C.
As Joshua says, serialization is the typical way to do these kinds of things. However, there are other binary protocols like MessagePack, Protocol Buffers, and Avro.
If you want to play with the bytecode structures, look at ASM and CGLIB; these are very common in Java applications.
There is nothing which matches your description.
The closest thing to a struct in Java is a simple class which holds values accessible either through its fields or through set/get methods.
The typical means to convert between Java class instances and on-the-wire representations is Java serialization which can be heavily customized as need be. It is what is used by Java's Remote Method Invocation API and works extremely well.
ByteBuffer.wrap(bytes).getDouble(); // e.g. to pull a double straight out of a byte array
No, this is not possible. You're trying to use Java like C, which is bound to cause complications. Either learn to do things the Java way, or go back to C.
In this case, the Java way would probably involve DataInputStream and/or DataOutputStream.
You cannot cast an array of bytes to an instance of a class.
But you can do much much more with Java.
Java has a built-in, very strong and very flexible serialization mechanism. This is what you need. You can read and write objects to/from a stream.
If both sides are written in Java, there is no problem at all. If one of the sides is not Java, you can customize your serialization. Start by reading the javadoc of java.io.Serializable.
For evaluating an algorithm I have to count how often the items of a byte-array are read/accessed. The byte-array is filled with the contents of a file and my algorithm can skip over many of the bytes in the array (like for example the Boyer–Moore string search algorithm). I have to find out how often an item is actually read. This byte-array is passed around to multiple methods and classes.
My ideas so far:
Increment a counter at each spot where the byte-array is read. This seems error-prone since there are many of these spots. Additionally I would have to remove this code afterwards such that it does not influence the runtime of my algorithm.
Use an ArrayList instead of a byte-array and overwrite its "get" method. Again, there are a lot of methods that would have to be modified and I suspect that there would be a performance loss.
Can I somehow use the Eclipse debug-mode? I see that I can specify a hit-count for watchpoints but it does not seem to be possible to output the hit count?!
Can maybe the Reflection API help me somehow?
Somewhat like 2), but in order to reduce the effort: Can I make a Java method accept an ArrayList where it wants an array such that it transparently calls the "get" method whenever an item is read?
There might be an out-of-the-box solution but I'd probably just wrap the byte array in a simple class.
public class ByteArrayWrapper {
    private byte[] bytes;
    private long readCount = 0;

    public ByteArrayWrapper(byte[] bytes) {
        this.bytes = bytes;
    }

    public int getSize() { return bytes.length; }
    public byte getByte(int index) { readCount++; return bytes[index]; }
    public long getReadCount() { return readCount; }
}
Something along these lines. Of course this does influence the running time but not very much. You could try it and time the difference, if you find it is significant, we'll have to find another way.
The most efficient way to do this is to add some code injection. However this is likely to be much more complicated to get right than writing a wrapper for your byte[] and passing that around (tedious, but at least the compiler will help you). If you use a wrapper which does basically nothing (no counting), it will be almost as efficient as not using a wrapper, and when you want counting you can use an implementation which does that.
You could use EHCache without too much overhead: implement an in-memory cache, keyed by array index. EHCache provides an API which will let you query hit rates "out of the box".
There's no way to do this automatically with a real byte[]. Using JVM TI might help here, but I suspect it's overkill.
Personally I'd write a simple wrapper around the byte[] with methods to read() and write() specific fields. Those methods can then track all accesses (either individually for each byte, or as a total or both).
Of course this would require the actual access to be modified, but if you're testing some algorithms that might not be such a big drawback. The same goes for performance: it will definitely suffer a bit, but the effect might be small enough not to worry about it.