I am able to add and view the key/value pairs through my RESTful API method invocations.
But after adding the key/value pairs, when I try to list/view them using the redis-cli console, it does not list any values.
As you can notice in the console, it lists some junk values for the **keys *** command (after adding a new key/value via the browser), but when I try to retrieve the key, it shows up as empty.
What could be the reason for this?
How do I list the values properly in the console?
Also attaching the RESTful API method definitions:
The value you're seeing in the output of KEYS * is the Java-serialized string user.
The first two bytes \xac\xed (hex: 0xACED) are the STREAM_MAGIC constant.
The next two bytes \x00\x05 (hex: 0x0005) are STREAM_VERSION, the version of the serialization protocol.
The next byte, t, is 0x74 = TC_STRING, meaning a string object follows.
Finally, \x00\x04 is the length of the string.
This protocol is described in the Object Serialization Stream Protocol, in 6.4.2 Terminal Symbols and Constants.
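You can reproduce those bytes with a few lines of plain Java:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;

public class SerializedDump {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject("user");
        }
        // Prints: ac ed 00 05 74 00 04 75 73 65 72
        // (STREAM_MAGIC, STREAM_VERSION, TC_STRING, length 4, then "user")
        for (byte b : bos.toByteArray()) {
            System.out.printf("%02x ", b);
        }
    }
}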
You probably want to review your code to see why the strings are being Java-serialized before reaching Redis. It is probably related to the h: that shows in the screenshot.
In the meantime, you can do GET "\xac\xed\x00\x05t\x00\x04user" to inspect the value of your user key.
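If the service happens to be built on Spring Data Redis (an assumption on my part, but its default JdkSerializationRedisSerializer produces exactly this kind of \xac\xed prefix), a common fix is to configure the template with string serializers so keys stay readable in redis-cli:

import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(factory);
    // Store keys (and hash keys) as plain UTF-8 strings instead of JDK-serialized blobs
    template.setKeySerializer(new StringRedisSerializer());
    template.setHashKeySerializer(new StringRedisSerializer());
    return template;
}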
When I try to serialize an object to JSON using Jackson object mapper, it works perfectly.
{"id":1291741231928705024,"uuid":null,"email":"kannanrbk.r#gmail.com"}
Whereas, when I access it using a Spring REST controller, the long value is rounded off in the last 3 digits.
I read the existing questions on Stack Overflow; most of them suggest changing the datatype to String. But we use the Long value reference in most places, and changing the datatype would need some refactoring.
I did my initial analysis:

- We are using Jackson's ObjectMapper.
- From Spring, it indirectly calls MappingJackson2HttpMessageConverter.
- The problem might be somewhere around the JSON parser, where it treats any number as a double (about 15 significant digits), after which it's rounded off.
Is there any way to fix this issue?
There is no problem with Jackson/Java/Spring Boot; the problem is with JavaScript/the browser.
Trying to reproduce the issue, I serialized the same object and got this using curl:
$ curl localhost:8080
{"id":1291741231928705024,"uuid":null,"email":"kannanrbk.r#gmail.com"}
Here the number is correctly serialized.
The same JSON viewed in Firefox does get truncated:
However the "Raw Data" tab displays the number correctly:
In JavaScript, 1291741231928705024 is not a safe integer (see Number.isSafeInteger()):

> Number.isSafeInteger(1291741231928705024);
false

The number is greater than 2^53 - 1, so it gets rounded. Even more confusing situations are possible in JavaScript:
> 1291741231928705024 === 1291741231928705022
true
Possible solution
First of all, check your client against this kind of problem. If it can safely deserialize such numbers, then you're fine.
Alternatively, you can serialize longs as Strings (as you mentioned in the question); this is what Twitter proposes in its Twitter IDs (snowflake) article:
To allow Javascript and JSON parsers to read the IDs, Twitter objects include a string version of any ID when responding with JSON. Status, User, Direct Message, Saved Search and other IDs in the Twitter API are therefore returned as both an integer and a string in JSON responses.
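If you go the string route but want to keep the field a Long in Java, one option with Jackson is the ToStringSerializer that ships with jackson-databind (the User class here is just an illustration matching the JSON in the question):

import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;

public class User {
    // Serialized as "1291741231928705024" (a JSON string) instead of a number
    @JsonSerialize(using = ToStringSerializer.class)
    private Long id;

    private String uuid;
    private String email;

    // getters and setters omitted
}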
Try using a BigInt for your primary key.
I have to read a file and store the values and then later do a lookup.
For e.g., the file will look as follows:
Gryffindor = 5
Gryffindor.Name.Harry = 10
Gryffindor.Name.Harry.Cloak.Black = 15
and so on...
I need to store these (I was thinking of a map). Later, I need to process every character and look up this map to assign them points. Suppose I encounter Harry: I know that he's from Gryffindor and that he's wearing a blue cloak. I will have to look up this map (or whatever object I use) as
Gryffindor.Name.Harry.Cloak.Blue
which should return nothing. I then need to fall back to just the name and look up
Gryffindor.Name.Harry
which should return 10.
Similarly, if I look up Ron (suppose he's wearing black),
Gryffindor.Name.Ron.Cloak.Black
should return nothing, fall back to
Gryffindor.Name.Ron
again nothing, fall back to
Gryffindor
which should return 5.
What would be an elegant way to store and read this data? I was thinking of using a map to store the key/value pairs and then a switch-case to read them back. How would you do it?
Java has a built-in Properties class that implements Map and can read and write the data format you describe (see that class's load() and store() methods).
There's nothing in there to implement your "fall back to a higher-level key" feature, so you'll need to write a method that looks in the Properties instance for data under the desired key, and keeps trying successively shorter versions of the same key if it finds nothing.
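A minimal sketch of that fallback, assuming the file uses standard properties syntax as shown in the question:

import java.io.FileReader;
import java.util.Properties;

public class PointsLookup {
    private final Properties props = new Properties();

    public PointsLookup(String file) throws Exception {
        try (FileReader reader = new FileReader(file)) {
            props.load(reader); // parses lines like "Gryffindor.Name.Harry = 10"
        }
    }

    // Try the full key, then keep dropping the last dotted segment
    // until something matches; returns null if nothing does.
    public String lookup(String key) {
        while (true) {
            String value = props.getProperty(key);
            if (value != null) {
                return value;
            }
            int lastDot = key.lastIndexOf('.');
            if (lastDot < 0) {
                return null;
            }
            key = key.substring(0, lastDot);
        }
    }
}

With the sample data, lookup("Gryffindor.Name.Harry.Cloak.Blue") falls through to "Gryffindor.Name.Harry" and returns "10", while lookup("Gryffindor.Name.Ron.Cloak.Black") falls all the way back to "Gryffindor" and returns "5".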
I am using Hazelcast 2.5 in a cluster. I have a map (key: String, value: ArrayList of user-defined objects). I am able to put/remove fine in most places, but in one specific part of my code the put operation fails silently (the key string used for the put operation is unique, and the ArrayList is not empty either). No exceptions are thrown.

In case there was a lock involved, I even tried tryPut, and that call gave me a true return value. Right after the put operation, I tried printing out the keySet for the map but cannot see the key I just inserted; the size of the map has not changed either (yet tryPut returned true, and I'm reasonably sure the string I am using for the key is unique, and I am hoping the binary form of the key is unique as well).

If the binary form of my key is not unique, I am assuming that tryPut should return false, or at least replace the previously added key/value with the new key/value pair (unless I misinterpreted the docs).
boolean putVal = testMap.tryPut(this.testObj.UUID, testEntity, timeout, TimeUnit.MILLISECONDS); //timeout is 2000L or 2 seconds in this case
Any thoughts on troubleshooting this or figuring out if the binary form for my key is causing the issue will be appreciated.
Thanks
Try doing a get and see if there is any value associated with that key. If there is, the put was successful.
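For instance, reusing the names from the question, a quick sanity check right after the tryPut might be:

// Read the value straight back through the same map reference
Object stored = testMap.get(this.testObj.UUID);
System.out.println("value visible via get? " + (stored != null));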
I am currently trying to perform some regex on the result of a DatagramPacket.getData() call.
Implemented as: String myString = new String(thepkt.getData());
But weirdly, Java is dropping the end quotation mark that it uses to encapsulate all the data (see linked image below).
When I click the field in the variable inspector during a debug session and then click off the field without changing anything, it corrects itself, and it even highlights the variable inspection field in yellow to signal a change.
Its value is also displayed as if it were still a byte array rather than a String object.
http://i.imgur.com/8ZItsZI.png
It's throwing off my regex, and I can't see anything that would cause it. It's a client-server simulation, and on the client side getData returns the data with no problem.
I got it working by using the solution provided in:
https://stackoverflow.com/a/8557165/1700855
But I still don't understand how not specifying the length of the packet to the String constructor would cause it to drop the closing double quote. Can anyone provide an explanation, as I really like to understand the solutions to my issues before moving on :)
The problem is that you didn't read the spec for DatagramPacket.getData:
Returns the data buffer. The data received or the data to be sent
starts from the offset in the buffer, and runs for length long.
So, to be correct, you should use
new String(thepkt.getData(), thepkt.getOffset(), thepkt.getLength())
Or, to not use the default charset:
new String(thepkt.getData(), thepkt.getOffset(), thepkt.getLength(), someCharset)
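Put together, a receive path that builds the String safely might look like this (the port, buffer size, and charset are arbitrary choices for the sketch):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

public class Receiver {
    public static void main(String[] args) throws Exception {
        byte[] buf = new byte[1024]; // reused buffer: stale bytes past getLength() are not ours
        try (DatagramSocket socket = new DatagramSocket(4445)) {
            DatagramPacket pkt = new DatagramPacket(buf, buf.length);
            socket.receive(pkt);
            // offset + length restrict the String to this packet's actual payload
            String msg = new String(pkt.getData(), pkt.getOffset(), pkt.getLength(),
                    StandardCharsets.UTF_8);
            System.out.println(msg);
        }
    }
}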
Okay, so I have been reading a lot about Hadoop and MapReduce, and maybe it's because I'm not as familiar with iterators as most, but I have a question I can't seem to find a direct answer to. Basically, as I understand it, the map function is executed in parallel by many machines and/or cores. Thus, whatever you are working on must not depend on prior code having been executed for the program to make any kind of speed gains. This works perfectly for me, but what I'm doing requires me to test information in small batches.

Basically, I need to send batches of lines from a .csv as arrays of 32, 64, 128 or however many lines each. For example, lines 0-127 go to core1's execution of the map function, lines 128-255 go to core2's, etc. Also, I need to have the contents of each batch available as a whole inside the function, as if I had passed it an array.

I read a little about how the new Java API allows for something called push and pull, and that this allows things to be sent in batches, but I couldn't find any example code. I'm going to continue researching, and I'll post anything I find, but if anyone knows, could they please post in this thread? I would really appreciate any help.
Edit:
If you could simply ensure that the chunks of the .csv are sent in sequence, you could perform it this way. I guess this also assumes that there are globals in MapReduce.
//** concept not code **//
GLOBAL_COUNTER = 0;
GLOBAL_ARRAY = NEW ARRAY();
map()
{
GLOBAL_ARRAY[GLOBAL_COUNTER] = ITERATOR_VALUE;
GLOBAL_COUNTER++;
if(GLOBAL_COUNTER == 128)
{
//EXECUTE TEST WITH AN ARRAY OF 128 VALUES FOR COMPARISON
GLOBAL_COUNTER = 0;
}
}
If you're trying to get a chunk of lines from your CSV file into the mapper, you might consider writing your own InputFormat/RecordReader and potentially your own WritableComparable object. With the custom InputFormat/RecordReader you'll be able to specify how objects are created and passed to the mapper based on the input you receive.
If the mapper is doing what you want, but you need these chunks of lines sent to the reducer, make the output key for the mapper the same for each line you want in the same reduce function.
The default TextInputFormat will give input to your mapper like this (the keys/offsets in this example are just random numbers):
0 Hello World
123 My name is Sam
456 Foo bar bar foo
Each of those lines will be read into your mapper as a key/value pair. Just modify the key to be the same for each line you need and write it to the output:
0 Hello World
0 My name is Sam
1 Foo bar bar foo
The first time the reduce function is called, it will receive a key/value pair with the key being "0" and the value being an Iterable object containing "Hello World" and "My name is Sam". You'll be able to access both of these values in the same reduce method call by using the Iterable object.
Here is that idea as Java (new mapreduce API) rather than pseudo code:

private int count = 0;

public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    // Give every two consecutive lines the same key so they arrive
    // together in a single reduce call
    context.write(new IntWritable(count / 2), value);
    count++;
}

public void reduce(IntWritable key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    for (Text value : values) {
        // Do something with each line in the batch
    }
}
Hope that helps. :)
If the end goal of what you want is to force certain sets to go to certain machines for processing, you will want to look into writing your own Partitioner. Otherwise, Hadoop will split the data automatically for you depending on the number of reducers.
I suggest reading the tutorial on the Hadoop site to get a better understanding of M/R.
If you simply want to send N lines of input to a single mapper, you can use the NLineInputFormat class. You could then do the line parsing (splitting on commas, etc.) in the mapper.
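For example, a sketch of the job setup (the 128 is just one of the batch sizes from the question):

import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

// in the job configuration:
job.setInputFormatClass(NLineInputFormat.class);
NLineInputFormat.setNumLinesPerSplit(job, 128); // each input split, hence each mapper, gets 128 lines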
If you want to have access to the lines before and after the line the mapper is currently processing, you may have to write your own input format. Subclassing FileInputFormat is usually a good place to start. You could create an InputFormat that reads N lines, concatenates them, and sends them as one block to a mapper, which then splits the input into N lines again and begins processing.
As far as globals in Hadoop go, you can specify some custom parameters when you create the job configuration, but as far as I know, you cannot change them in a worker and expect the change to propagate throughout the cluster. To set a job parameter that will be visible to workers, do the following where you are creating the job:
job.getConfiguration().set(Constants.SOME_PARAM, "my value");
Then, to read the parameter's value in the mapper or reducer:
public void map(Text key, Text value, Context context) {
Configuration conf = context.getConfiguration();
String someParam = conf.get(Constants.SOME_PARAM);
// use someParam in processing input
}
Hadoop has support for basic types such as int, long, String, and boolean to be used as parameters.
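For example, a numeric parameter can be written and read back with the typed getters (the parameter name here is made up):

job.getConfiguration().setInt("my.batch.size", 128); // when building the job
int batchSize = conf.getInt("my.batch.size", 64);    // in the mapper, with a default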