How to create XGBoost's DMatrix from float[] data in Java?

Exception in thread "main" java.lang.RuntimeException: ml.dmlc.xgboost4j.java.XGBoostError: [16:04:11] /workspace/src/metric/multiclass_metric.cu:171: Check failed: preds.Size() == 0 (195 vs. 0) :
Stack trace:
[bt] (0) /tmp/libxgboost4j8070440599892179504.so(+0x33c38d) [0x7fb26893c38d]
[bt] (1) /tmp/libxgboost4j8070440599892179504.so(xgboost::metric::EvalMClassBase<xgboost::metric::EvalMultiLogLoss>::Eval(xgboost::HostDeviceVector<float> const&, xgboost::MetaInfo const&)+0xfc) [0x7fb26893fb7c]
[bt] (2) /tmp/libxgboost4j8070440599892179504.so(xgboost::LearnerImpl::EvalOneIter(int, std::vector<std::shared_ptr<xgboost::DMatrix>, std::allocator<std::shared_ptr<xgboost::DMatrix> > > const&, std::vector<std::string, std::allocator<std::string> > const&)+0x45c) [0x7fb2688e3a6c]
[bt] (3) /tmp/libxgboost4j8070440599892179504.so(XGBoosterEvalOneIter+0x2f5) [0x7fb268731ed5]
[bt] (4) /tmp/libxgboost4j8070440599892179504.so(Java_ml_dmlc_xgboost4j_java_XGBoostJNI_XGBoosterEvalOneIter+0x2cd) [0x7fb26872336d]
[bt] (5) [0x7fb288779a10]
at FinalDataprocessing.main(FinalDataprocessing.java:244)
Caused by: ml.dmlc.xgboost4j.java.XGBoostError: [16:04:11] /workspace/src/metric/multiclass_metric.cu:171: Check failed: preds.Size() == 0 (195 vs. 0) :
Stack trace:
[bt] (0) /tmp/libxgboost4j8070440599892179504.so(+0x33c38d) [0x7fb26893c38d]
[bt] (1) /tmp/libxgboost4j8070440599892179504.so(xgboost::metric::EvalMClassBase<xgboost::metric::EvalMultiLogLoss>::Eval(xgboost::HostDeviceVector<float> const&, xgboost::MetaInfo const&)+0xfc) [0x7fb26893fb7c]
[bt] (2) /tmp/libxgboost4j8070440599892179504.so(xgboost::LearnerImpl::EvalOneIter(int, std::vector<std::shared_ptr<xgboost::DMatrix>, std::allocator<std::shared_ptr<xgboost::DMatrix> > > const&, std::vector<std::string, std::allocator<std::string> > const&)+0x45c) [0x7fb2688e3a6c]
[bt] (3) /tmp/libxgboost4j8070440599892179504.so(XGBoosterEvalOneIter+0x2f5) [0x7fb268731ed5]
[bt] (4) /tmp/libxgboost4j8070440599892179504.so(Java_ml_dmlc_xgboost4j_java_XGBoostJNI_XGBoosterEvalOneIter+0x2cd) [0x7fb26872336d]
[bt] (5) [0x7fb288779a10]
at ml.dmlc.xgboost4j.java.XGBoostJNI.checkCall(XGBoostJNI.java:48)
at ml.dmlc.xgboost4j.java.Booster.evalSet(Booster.java:218)
at ml.dmlc.xgboost4j.java.Booster.evalSet(Booster.java:235)
at ml.dmlc.xgboost4j.java.XGBoost.trainAndSaveCheckpoint(XGBoost.java:230)
at ml.dmlc.xgboost4j.java.XGBoost.train(XGBoost.java:304)
at ml.dmlc.xgboost4j.java.XGBoost.train(XGBoost.java:127)
at ml.dmlc.xgboost4j.java.XGBoost.train(XGBoost.java:98)
at FinalDataprocessing.main(FinalDataprocessing.java:241)
You can take a look at the code here: https://github.com/harihkim/xgb_java/blob/master/src/main/java/XgbModel.java
I tried to create a DMatrix to hold training features from a Tablesaw (Java library) Table.
DMatrix requires the data to be of type float[]. It gives an error when using that DMatrix to train XGBoost:
error: Check failed: preds.Size() == 0 (195 vs. 0)
How do I fix this?
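For reference, a minimal sketch of building a DMatrix from a flat float[] (names and dimensions here are illustrative; it assumes the DMatrix(float[] data, int nrow, int ncol) constructor and setLabel() provided by xgboost4j, and newer releases may also expect a 'missing' value argument):

import ml.dmlc.xgboost4j.java.DMatrix;
import ml.dmlc.xgboost4j.java.XGBoostError;

public class DMatrixExample {
    public static void main(String[] args) throws XGBoostError {
        // Two rows with three features each, stored row-major in one flat float[].
        int nRows = 2, nCols = 3;
        float[] data = new float[] {1f, 2f, 3f, 4f, 5f, 6f};
        float[] labels = new float[] {0f, 1f};

        // Dense construction from the flat array.
        DMatrix trainMat = new DMatrix(data, nRows, nCols);
        // Labels should be set on every DMatrix passed to training or to the
        // evaluation watchlist; otherwise the multiclass metric has nothing to
        // evaluate against, which may be related to checks like the one above.
        trainMat.setLabel(labels);
    }
}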

Related

Heap Corruption In C When Using DLL With JNA

I am using C-language native API callbacks with DLL files. When the callback is called the first time everything works fine, but on the second call I get a heap corruption error and the JVM crashes.
In the native code, the memory allocated in the first call is released and then used again in the second call, and during the memory allocation in the second call the JVM crashes. But in the same place, when the second call uses a new memory pointer rather than the one used in the previous call, I do not get this heap corruption error.
As this callback is called many times, I cannot keep allocating new space every time. In the logs below I get the error INVALID_POINTER_READ.
I am not able to understand the reason behind it and how this can be fixed. When the same DLL is used with JNA it works fine.
Java/JNA Code:
Setting Hook:
final PropertyCallBack callback = new PropertyCallBack();
final int setHookStatus = callback.setHook();

private static CALLBACK callback;

public int setHook() {
    if (callback != null) {
        return 0;
    }
    synchronized (this) {
        if (callback == null) {
            callback = new CALLBACK();
            return callback.setHook();
        }
    }
    return 0;
}
Callback Method Called From Native:
@Override
public int PropertyHook(final DESTINATION dest, final BACSTAC_READ_INFO.ByReference info) {
    final PROPERTY_CONTENTS.ByReference content = new PROPERTY_CONTENTS.ByReference();
    final BUFFER.ByReference buffer = new BUFFER.ByReference();
    // Memory assign
    final int bufferSize = 1048;
    buffer.pBuffer = new Memory(bufferSize);
    buffer.nBufferSize = bufferSize;
    content.tag = "INVALID";
    content.buffer = buffer;
    content.nElements = 0;
    Pointer dev = NativeLibrary.INSTANCE.Call_1();
    Pointer obj = null;
    if (dev != null) {
        obj = NativeLibrary.INSTANCE.call_2(dev, info.objectID);
    }
    final int readDbStatus = NativeLibrary.INSTANCE.call_3(obj, info.prop, info.index, content, null);
    final int responseStatus = NativeLibrary.INSTANCE.call_4(dest, info, content);
    return 0;
}
When I analyzed the heap dump with WinDbg, I got the following details:
This dump file has an exception of interest stored in it.
The stored exception information can be accessed via .ecxr.
(6201c.5ef10): Access violation - code c0000005 (first/second chance not available)
For analysis of this file, run !analyze -v
ntdll!NtWaitForMultipleObjects+0x14:
00007ffa`46deb4f4 c3 ret
0:026> !analyze -v
*******************************************************************************
* *
* Exception Analysis *
* *
*******************************************************************************
*** WARNING: Unable to verify checksum for srv.dll
DEBUG_FLR_EXCEPTION_CODE(c0000374) and the ".exr -1" ExceptionCode(c0000005) don't match
KEY_VALUES_STRING: 1
Key : AV.Fault
Value: Read
Key : Timeline.Process.Start.DeltaSec
Value: 46
PROCESSES_ANALYSIS: 1
SERVICE_ANALYSIS: 1
STACKHASH_ANALYSIS: 1
TIMELINE_ANALYSIS: 1
Timeline: !analyze.Start
Name: <blank>
Time: 2019-12-02T11:13:41.439Z
Diff: 3429439 mSec
Timeline: Dump.Current
Name: <blank>
Time: 2019-12-02T10:16:32.0Z
Diff: 0 mSec
Timeline: Process.Start
Name: <blank>
Time: 2019-12-02T10:15:46.0Z
Diff: 46000 mSec
DUMP_CLASS: 2
DUMP_QUALIFIER: 400
CONTEXT: (.ecxr)
rax=0000000000030000 rbx=000000002b200000 rcx=0000000000000303
rdx=0000000000000003 rsi=01fda8c00000ed00 rdi=000000002b223ef0
rip=00007ffa46d6cb7a rsp=000000002b8ff500 rbp=0000000000000008
r8=0000000000000028 r9=0000000000000030 r10=00000000014da2d0
r11=00000000014e2ef0 r12=0000000000000001 r13=0000000000000003
r14=000000002b223ee0 r15=000000000600c1ba
iopl=0 nv up ei pl zr na po nc
cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010246
ntdll!RtlpAllocateHeap+0xdaa:
00007ffa`46d6cb7a 498b07 mov rax,qword ptr [r15] ds:00000000`0600c1ba=????????????????
Resetting default scope
FAULTING_IP:
ntdll!RtlpAllocateHeap+daa
00007ffa`46d6cb7a 498b07 mov rax,qword ptr [r15]
EXCEPTION_RECORD: (.exr -1)
ExceptionAddress: 00007ffa46d6cb7a (ntdll!RtlpAllocateHeap+0x0000000000000daa)
ExceptionCode: c0000005 (Access violation)
ExceptionFlags: 00000000
NumberParameters: 2
Parameter[0]: 0000000000000000
Parameter[1]: 000000000600c1ba
Attempt to read from address 000000000600c1ba
DEFAULT_BUCKET_ID: HEAP_CORRUPTION
PROCESS_NAME: javaw.exe
FOLLOWUP_IP:
ntdll!RtlpAllocateHeap+daa
00007ffa`46d6cb7a 498b07 mov rax,qword ptr [r15]
READ_ADDRESS: 000000000600c1ba
ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%p referenced memory at 0x%p. The memory could not be %s.
EXCEPTION_CODE: (NTSTATUS) 0xc0000374 - A heap has been corrupted.
EXCEPTION_CODE_STR: c0000005
EXCEPTION_PARAMETER1: 0000000000000000
EXCEPTION_PARAMETER2: 000000000600c1ba
WATSON_BKT_PROCSTAMP: 5d1dea24
WATSON_BKT_PROCVER: 8.0.2210.11
PROCESS_VER_PRODUCT: Java(TM) Platform SE 8
WATSON_BKT_MODULE: ntdll.dll
WATSON_BKT_MODSTAMP: 7f828745
WATSON_BKT_MODOFFSET: 1cb7a
WATSON_BKT_MODVER: 10.0.17134.799
MODULE_VER_PRODUCT: Microsoft® Windows® Operating System
BUILD_VERSION_STRING: 17134.1.amd64fre.rs4_release.180410-1804
MODLIST_WITH_TSCHKSUM_HASH: f06ad8a6a7f7267c783c08e3a62df4696020d52f
MODLIST_SHA1_HASH: cdafa8057ac19b1a3608c439ebbfa992407212d6
NTGLOBALFLAG: 0
PROCESS_BAM_CURRENT_THROTTLED: 0
PROCESS_BAM_PREVIOUS_THROTTLED: 0
APPLICATION_VERIFIER_FLAGS: 0
DUMP_FLAGS: 94
DUMP_TYPE: 1
ANALYSIS_SESSION_HOST: MD2E86EC
ANALYSIS_SESSION_TIME: 12-02-2019 16:43:41.0439
ANALYSIS_VERSION: 10.0.18362.1 x86fre
THREAD_ATTRIBUTES:
ADDITIONAL_DEBUG_TEXT: Enable Pageheap/AutoVerifer ; Followup set based on attribute [Is_ChosenCrashFollowupThread] from Frame:[0] on thread:[PSEUDO_THREAD]
FAULTING_THREAD: 0005ef10
THREAD_SHA1_HASH_MOD_FUNC: 5d531e271dfb1ef7af4984c7ee0dd671c07337f5
THREAD_SHA1_HASH_MOD_FUNC_OFFSET: d858fa5fb04738fbbbbb9e4df89e26d53dc74794
OS_LOCALE: ENU
BUGCHECK_STR: APPLICATION_FAULT_INVALID_POINTER_READ_HEAP_CORRUPTION
PRIMARY_PROBLEM_CLASS: APPLICATION_FAULT
PROBLEM_CLASSES:
ID: [0n262]
Type: [HEAP_CORRUPTION]
Class: Primary
Scope: DEFAULT_BUCKET_ID (Failure Bucket ID prefix)
BUCKET_ID
Name: Add
Data: Omit
PID: [0x6201c]
TID: [0x5ef10]
Frame: [0] : ntdll!RtlpAllocateHeap
ID: [0n262]
Type: [HEAP_CORRUPTION]
Class: Primary
Scope: BUCKET_ID
Name: Add
Data: Omit
PID: [0x6201c]
TID: [0x5ef10]
Frame: [0] : ntdll!RtlpAllocateHeap
ID: [0n313]
Type: [#ACCESS_VIOLATION]
Class: Addendum
Scope: BUCKET_ID
Name: Omit
Data: Omit
PID: [Unspecified]
TID: [0x5ef10]
Frame: [0] : ntdll!RtlpAllocateHeap
ID: [0n285]
Type: [INVALID_POINTER_READ]
Class: Primary
Scope: BUCKET_ID
Name: Add
Data: Omit
PID: [Unspecified]
TID: [0x5ef10]
Frame: [0] : ntdll!RtlpAllocateHeap
LAST_CONTROL_TRANSFER: from 00007ffa46d69725 to 00007ffa46d6cb7a
STACK_TEXT:
00000000`00000000 00000000`00000000 heap_corruption!javaw.exe+0x0
THREAD_SHA1_HASH_MOD: ca4e26064d24ef7512d2e94de5a93c38dbe82fe9
SYMBOL_STACK_INDEX: 0
SYMBOL_NAME: heap_corruption!javaw.exe
FOLLOWUP_NAME: MachineOwner
MODULE_NAME: heap_corruption
IMAGE_NAME: heap_corruption
DEBUG_FLR_IMAGE_TIMESTAMP: 0
STACK_COMMAND: ** Pseudo Context ** ManagedPseudo ** Value: a3807e8 ** ; kb
FAILURE_BUCKET_ID: HEAP_CORRUPTION_c0000005_heap_corruption!javaw.exe
BUCKET_ID: APPLICATION_FAULT_INVALID_POINTER_READ_HEAP_CORRUPTION_heap_corruption!javaw.exe
FAILURE_EXCEPTION_CODE: c0000005
FAILURE_IMAGE_NAME: heap_corruption
BUCKET_ID_IMAGE_STR: heap_corruption
FAILURE_MODULE_NAME: heap_corruption
BUCKET_ID_MODULE_STR: heap_corruption
FAILURE_FUNCTION_NAME: javaw.exe
BUCKET_ID_FUNCTION_STR: javaw.exe
BUCKET_ID_OFFSET: 0
BUCKET_ID_MODTIMEDATESTAMP: 0
BUCKET_ID_MODCHECKSUM: 0
BUCKET_ID_MODVER_STR: 0.0.0.0
BUCKET_ID_PREFIX_STR: APPLICATION_FAULT_INVALID_POINTER_READ_
FAILURE_PROBLEM_CLASS: APPLICATION_FAULT
FAILURE_SYMBOL_NAME: heap_corruption!javaw.exe
WATSON_STAGEONE_URL: http://watson.microsoft.com/StageOne/javaw.exe/8.0.2210.11/5d1dea24/ntdll.dll/10.0.17134.799/7f828745/c0000005/0001cb7a.htm?Retriage=1
TARGET_TIME: 2019-12-02T10:16:32.000Z
OSBUILD: 17134
OSSERVICEPACK: 753
SERVICEPACK_NUMBER: 0
OS_REVISION: 0
SUITE_MASK: 256
PRODUCT_TYPE: 1
OSPLATFORM_TYPE: x64
OSNAME: Windows 10
OSEDITION: Windows 10 WinNt SingleUserTS
USER_LCID: 0
OSBUILD_TIMESTAMP: unknown_date
BUILDDATESTAMP_STR: 180410-1804
BUILDLAB_STR: rs4_release
BUILDOSVER_STR: 10.0.17134.1.amd64fre.rs4_release.180410-1804
ANALYSIS_SESSION_ELAPSED_TIME: 307a
ANALYSIS_SOURCE: UM
FAILURE_ID_HASH_STRING: um:heap_corruption_c0000005_heap_corruption!javaw.exe
FAILURE_ID_HASH: {ddc2b378-b1e1-2aec-adc8-f11b7a5773a9}
Any help with fixing or debugging this will be highly appreciated.
I found a solution that prevents the above heap corruption: calling the NativeLibrary methods of PropertyHook in another thread. Somehow, by calling the NativeLibrary methods in a different thread, the heap does not get corrupted and consequently the JVM no longer crashes.
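For what it's worth, a minimal sketch of that workaround (a single dedicated worker thread for the native calls; NativeLibrary and its call_* methods are the placeholder names used in the question, and the surrounding types are stubbed out here):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PropertyHookWorker {
    // One dedicated thread, so the native calls never run on the JNA callback thread.
    private static final ExecutorService NATIVE_EXECUTOR = Executors.newSingleThreadExecutor();

    public int propertyHook(final Object dest, final Object info) {
        Future<Integer> result = NATIVE_EXECUTOR.submit(() -> {
            // ... allocate PROPERTY_CONTENTS / BUFFER and invoke
            // NativeLibrary.INSTANCE.call_1() ... call_4() here ...
            return 0;
        });
        try {
            // Block until the native work finishes so the callback can still
            // return a meaningful status to the native side.
            return result.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}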

How to fix a processor architecture mismatch (unexpected e_machine) caused by a .so file

I have a Xamarin Android project and am currently using Visual Studio 2015.
Inside of my MainActivity.cs, I have the following code:
Com.Alk.Sdk.SharedLibraryLoader.LoadLibrary("alksdk", this);
which then goes into:
// Metadata.xml XPath method reference: path="/api/package[@name='com.alk.sdk']/class[@name='SharedLibraryLoader']/method[@name='loadLibrary' and count(parameter)=2 and parameter[1][@type='java.lang.String'] and parameter[2][@type='android.content.Context']]"
[Register ("loadLibrary", "(Ljava/lang/String;Landroid/content/Context;)Z", "")]
public static unsafe bool LoadLibrary (string p0, global::Android.Content.Context p1)
{
    if (id_loadLibrary_Ljava_lang_String_Landroid_content_Context_ == IntPtr.Zero)
        id_loadLibrary_Ljava_lang_String_Landroid_content_Context_ = JNIEnv.GetStaticMethodID (class_ref, "loadLibrary", "(Ljava/lang/String;Landroid/content/Context;)Z");
    IntPtr native_p0 = JNIEnv.NewString (p0);
    try {
        JValue* __args = stackalloc JValue [2];
        __args [0] = new JValue (native_p0);
        __args [1] = new JValue (p1);
        bool __ret = JNIEnv.CallStaticBooleanMethod (class_ref, id_loadLibrary_Ljava_lang_String_Landroid_content_Context_, __args);
        return __ret;
    } finally {
        JNIEnv.DeleteLocalRef (native_p0);
    }
}
The problem I am having is that when it calls the JNIEnv.CallStaticBooleanMethod, it throws this exception:
[ERROR] FATAL UNHANDLED EXCEPTION: Java.Lang.UnsatisfiedLinkError: dlopen failed: "/data/data/com.pai.rp/app_lib/libalksdk.so" has unexpected e_machine: 3
--- End of managed Java.Lang.UnsatisfiedLinkError stack trace ---
java.lang.UnsatisfiedLinkError: dlopen failed: "/data/data/com.pai.rp/app_lib/libalksdk.so" has unexpected e_machine: 3
at java.lang.Runtime.load0(Runtime.java:908)
at java.lang.System.load(System.java:1542)
at com.alk.sdk.SharedLibraryLoader.loadLibrary(SharedLibraryLoader.java:44)
at com.pai.rp.MainActivity.n_onCreate(Native Method)
at com.pai.rp.MainActivity.onCreate(MainActivity.java:30)
at android.app.Activity.performCreate(Activity.java:6973)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1126)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2946)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3064)
at android.app.ActivityThread.-wrap14(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1659)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6823)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1557)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1445)
From what I've been able to research, I've found that e_machine: 3 indicates the expected architecture is Intel 80386 (ELF headers). When using a Google Nexus 10 emulator it seems to work; however, when I use a Galaxy Tab E (ARM) through USB debugging, it crashes and gives me this error.
So the question is: how do I correct this?
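As a side note, here is a small standalone sketch (plain desktop Java, not part of the Xamarin build itself) that reads the e_machine value straight from a .so file, which is a quick way to confirm which CPU architecture the library was actually built for. It assumes a little-endian ELF, which is the common case for Android libraries:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ElfMachineCheck {
    public static void main(String[] args) throws IOException {
        // e_machine is the 16-bit field at offset 0x12 of the ELF header:
        // 3 = x86 (EM_386), 40 = ARM (EM_ARM), 62 = x86-64, 183 = AArch64.
        byte[] header = Files.readAllBytes(Paths.get(args[0]));
        int eMachine = ByteBuffer.wrap(header, 0x12, 2)
                .order(ByteOrder.LITTLE_ENDIAN)
                .getShort() & 0xFFFF;
        System.out.println("e_machine = " + eMachine);
    }
}

If the value is 3 for libalksdk.so, the library was built only for x86, which matches the working emulator and the crash on the ARM tablet.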

How to increase Dataflow read parallelism from Cassandra

I am trying to export a lot of data (2 TB, 30kkk rows) from Cassandra to BigQuery. All my infrastructure is on GCP. My Cassandra cluster has 4 nodes (4 vCPUs, 26 GB memory, 2000 GB PD (HDD) each). There is one seed node in the cluster. I need to transform my data before writing to BQ, so I am using Dataflow. The worker type is n1-highmem-2. The workers and Cassandra instances are in the same zone, europe-west1-c. My limits for Cassandra:
The part of my pipeline code responsible for the read transform is located here.
Autoscaling
The problem is that when I don't set --numWorkers, autoscaling sets the number of workers like this (2 workers on average):
Load balancing
When I set --numWorkers=15, the read rate doesn't increase and only 2 workers communicate with Cassandra (I can tell from iftop, and only these workers have ~60% CPU load).
At the same time, the Cassandra nodes aren't under much load (CPU usage 20-30%). Network and disk usage of the seed node is about 2 times higher than on the others, but not too high, I think:
And here for a non-seed node:
Pipeline launch warnings
I get some warnings when the pipeline is launching:
WARNING: Size estimation of the source failed:
org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource#7569ea63
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.132.9.101:9042 (com.datastax.driver.core.exceptions.TransportException: [/10.132.9.101:9042] Cannot connect), /10.132.9.102:9042 (com.datastax.driver.core.exceptions.TransportException: [/10.132.9.102:9042] Cannot connect), /10.132.9.103:9042 (com.datastax.driver.core.exceptions.TransportException: [/10.132.9.103:9042] Cannot connect), /10.132.9.104:9042 [only showing errors of first 3 hosts, use getErrors() for more details])
My Cassandra cluster is on the GCE local network, and it seems that some queries are made from my local machine and cannot reach the cluster (I am launching the pipeline with the Dataflow Eclipse plugin as described here). These queries are about table size estimation. Can I specify the size estimation by hand or launch the pipeline from a GCE instance? Or can I just ignore these warnings? Does it affect the read rate?
I've tried launching the pipeline from a GCE VM. There is no longer a connectivity problem. I don't have varchar columns in my tables, but I still get warnings like this (no codec in the DataStax driver for [varchar <-> java.lang.Long]):
WARNING: Can't estimate the size
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.lang.Long]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:741)
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:588)
at com.datastax.driver.core.CodecRegistry.access$500(CodecRegistry.java:137)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:246)
at com.datastax.driver.core.CodecRegistry$TypeCodecCacheLoader.load(CodecRegistry.java:232)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3628)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2336)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2295)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2208)
at com.google.common.cache.LocalCache.get(LocalCache.java:4053)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4057)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4986)
at com.datastax.driver.core.CodecRegistry.lookupCodec(CodecRegistry.java:522)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:485)
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:467)
at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:69)
at com.datastax.driver.core.AbstractGettableByIndexData.getLong(AbstractGettableByIndexData.java:152)
at com.datastax.driver.core.AbstractGettableData.getLong(AbstractGettableData.java:26)
at com.datastax.driver.core.AbstractGettableData.getLong(AbstractGettableData.java:95)
at org.apache.beam.sdk.io.cassandra.CassandraServiceImpl.getTokenRanges(CassandraServiceImpl.java:279)
at org.apache.beam.sdk.io.cassandra.CassandraServiceImpl.getEstimatedSizeBytes(CassandraServiceImpl.java:135)
at org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource.getEstimatedSizeBytes(CassandraIO.java:308)
at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$BoundedReadEvaluator.startDynamicSplitThread(BoundedReadEvaluatorFactory.java:166)
at org.apache.beam.runners.direct.BoundedReadEvaluatorFactory$BoundedReadEvaluator.processElement(BoundedReadEvaluatorFactory.java:142)
at org.apache.beam.runners.direct.TransformExecutor.processElements(TransformExecutor.java:146)
at org.apache.beam.runners.direct.TransformExecutor.run(TransformExecutor.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Pipeline read code
// Read data from Cassandra table
PCollection<Model> pcollection = p.apply(CassandraIO.<Model>read()
    .withHosts(Arrays.asList("10.10.10.101", "10.10.10.102", "10.10.10.103", "10.10.10.104")).withPort(9042)
    .withKeyspace(keyspaceName).withTable(tableName)
    .withEntity(Model.class).withCoder(SerializableCoder.of(Model.class))
    .withConsistencyLevel(CASSA_CONSISTENCY_LEVEL));

// Transform pcollection to KV PCollection by rowName
PCollection<KV<Long, Model>> pcollection_by_rowName = pcollection
    .apply(ParDo.of(new DoFn<Model, KV<Long, Model>>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            c.output(KV.of(c.element().rowName, c.element()));
        }
    }));
Number of splits (Stackdriver log)
W Number of splits is less than 0 (0), fallback to 1
I Number of splits is 1
W Number of splits is less than 0 (0), fallback to 1
I Number of splits is 1
W Number of splits is less than 0 (0), fallback to 1
I Number of splits is 1
What I've tried
No effect:
set read consistency level to ONE
nodetool setstreamthroughput 1000, nodetool setinterdcstreamthroughput 1000
increase Cassandra read concurrency (in cassandra.yaml): concurrent_reads: 32
set different numbers of workers (1-40).
Some effect:
1. I've set numSplits = 10 as @jkff proposed. Now I can see in the logs:
I Murmur3Partitioner detected, splitting
W Can't estimate the size
W Can't estimate the size
W Number of splits is less than 0 (0), fallback to 10
I Number of splits is 10
W Number of splits is less than 0 (0), fallback to 10
I Number of splits is 10
I Splitting source org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource#6d83ee93 produced 10 bundles with total serialized response size 20799
I Splitting source org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource#25d02f5c produced 10 bundles with total serialized response size 19359
I Splitting source [0, 1) produced 1 bundles with total serialized response size 1091
I Murmur3Partitioner detected, splitting
W Can't estimate the size
I Splitting source [0, 0) produced 0 bundles with total serialized response size 76
W Number of splits is less than 0 (0), fallback to 10
I Number of splits is 10
I Splitting source org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource#2661dcf3 produced 10 bundles with total serialized response size 18527
But I've got another exception:
java.io.IOException: Failed to start reading from source: org.apache.beam.sdk.io.cassandra.Cassandra...
(5d6339652002918d): java.io.IOException: Failed to start reading from source: org.apache.beam.sdk.io.cassandra.CassandraIO$CassandraSource#5f18c296
at com.google.cloud.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.start(WorkerCustomSources.java:582)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:347)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:183)
at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:148)
at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:68)
at com.google.cloud.dataflow.worker.DataflowWorker.executeWork(DataflowWorker.java:336)
at com.google.cloud.dataflow.worker.DataflowWorker.doWork(DataflowWorker.java:294)
at com.google.cloud.dataflow.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:135)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:115)
at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:102)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:53 mismatched character 'p' expecting '$'
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:58)
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:24)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:43)
at org.apache.beam.sdk.io.cassandra.CassandraServiceImpl$CassandraReaderImpl.start(CassandraServiceImpl.java:80)
at com.google.cloud.dataflow.worker.WorkerCustomSources$BoundedReaderIterator.start(WorkerCustomSources.java:579)
... 14 more
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:53 mismatched character 'p' expecting '$'
at com.datastax.driver.core.Responses$Error.asException(Responses.java:144)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:50)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:817)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:651)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1077)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1000)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
Maybe there is a mistake: CassandraServiceImpl.java#L220
And this statement looks like a typo: CassandraServiceImpl.java#L207
Changes I've made to the CassandraIO code
As @jkff proposed, I've changed CassandraIO in the way I needed:
@VisibleForTesting
protected List<BoundedSource<T>> split(CassandraIO.Read<T> spec,
                                       long desiredBundleSizeBytes,
                                       long estimatedSizeBytes) {
    long numSplits = 1;
    List<BoundedSource<T>> sourceList = new ArrayList<>();
    if (desiredBundleSizeBytes > 0) {
        numSplits = estimatedSizeBytes / desiredBundleSizeBytes;
    }
    if (numSplits <= 0) {
        LOG.warn("Number of splits is less than 0 ({}), fallback to 10", numSplits);
        numSplits = 10;
    }
    LOG.info("Number of splits is {}", numSplits);
    Long startRange = MIN_TOKEN;
    Long endRange = MAX_TOKEN;
    Long startToken, endToken;
    String pk = "$pk";
    switch (spec.table()) {
        case "table1":
            pk = "table1_pk";
            break;
        case "table2":
        case "table3":
            pk = "table23_pk";
            break;
    }
    endToken = startRange;
    Long incrementValue = endRange / numSplits - startRange / numSplits;
    String splitQuery;
    if (numSplits == 1) {
        // we have an unique split
        splitQuery = QueryBuilder.select().from(spec.keyspace(), spec.table()).toString();
        sourceList.add(new CassandraIO.CassandraSource<T>(spec, splitQuery));
    } else {
        // we have more than one split
        for (int i = 0; i < numSplits; i++) {
            startToken = endToken;
            endToken = startToken + incrementValue;
            Select.Where builder = QueryBuilder.select().from(spec.keyspace(), spec.table()).where();
            if (i > 0) {
                builder = builder.and(QueryBuilder.gte("token(" + pk + ")", startToken));
            }
            if (i < (numSplits - 1)) {
                builder = builder.and(QueryBuilder.lt("token(" + pk + ")", endToken));
            }
            sourceList.add(new CassandraIO.CassandraSource(spec, builder.toString()));
        }
    }
    return sourceList;
}
I think this should be classified as a bug in CassandraIO. I filed BEAM-3424. You can try building your own version of Beam with that default of 1 changed to 100 or something like that, while this issue is being fixed.
I also filed BEAM-3425 for the bug during size estimation.

How to make a Google TV remote application in Android?

I have to make a remote application like this one: Link of App Play Store
My English isn't great, which is why I shared a link like this to show what I need to do. Please suggest how to approach it. As a first step, I created the keystore and added it. Now I want to search on the same IP for which devices are available, but nothing is detected, and I don't know why.
I used the reference code from Code Link
sendUserActionEvent() mView == null
09-29 14:20:02.741 1464-1464/com.entertailion.android.anymote E/ViewRootImpl: sendUserActionEvent() mView == null
09-29 14:20:02.771 1464-3440/com.entertailion.android.anymote E/anymote: ConnectingActivity: (IOE) Could not create socket to Unknown box
java.net.ConnectException: failed to connect to /10.10.20.52 (port 9551): connect failed: ECONNREFUSED (Connection refused)
at libcore.io.IoBridge.connect(IoBridge.java: 124)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 183)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 163)
at java.net.Socket.startupSocket(Socket.java: 590)
at java.net.Socket.tryAllAddresses(Socket.java: 128)
at java.net.Socket. < init > (Socket.java: 178)
at java.net.Socket. < init > (Socket.java: 150)
at javax.net.ssl.SSLSocket. < init > (SSLSocket.java: 764)
at com.android.org.conscrypt.OpenSSLSocketImpl. < init > (OpenSSLSocketImpl.java: 205)
at com.android.org.conscrypt.OpenSSLSocketFactoryImpl.createSocket(OpenSSLSocketFactoryImpl.java: 68)
at com.entertailion.java.anymote.connection.ConnectingTask.attemptToConnect(ConnectingTask.java: 360)
at com.entertailion.java.anymote.connection.ConnectingTask.connect(ConnectingTask.java: 203)
at com.entertailion.java.anymote.connection.ConnectingTask.run(ConnectingTask.java: 175)
Caused by: android.system.ErrnoException: connect failed: ECONNREFUSED(Connection refused)
at libcore.io.Posix.connect(Native Method)
at libcore.io.BlockGuardOs.connect(BlockGuardOs.java: 111)
at libcore.io.IoBridge.connectErrno(IoBridge.java: 137)
at libcore.io.IoBridge.connect(IoBridge.java: 122)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 183) 
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 163) 
at java.net.Socket.startupSocket(Socket.java: 590) 
at java.net.Socket.tryAllAddresses(Socket.java: 128) 
at java.net.Socket. < init > (Socket.java: 178) 
at java.net.Socket. < init > (Socket.java: 150) 
at javax.net.ssl.SSLSocket. < init > (SSLSocket.java: 764) 
at com.android.org.conscrypt.OpenSSLSocketImpl. < init > (OpenSSLSocketImpl.java: 205) 
at com.android.org.conscrypt.OpenSSLSocketFactoryImpl.createSocket(OpenSSLSocketFactoryImpl.java: 68) 
at com.entertailion.java.anymote.connection.ConnectingTask.attemptToConnect(ConnectingTask.java: 360) 
at com.entertailion.java.anymote.connection.ConnectingTask.connect(ConnectingTask.java: 203) 
at com.entertailion.java.anymote.connection.ConnectingTask.run(ConnectingTask.java:175)
09-29 14:20:02.821 1464-3440/com.entertailion.android.anymote E/anymote: ConnectingActivity: Failed to connect
java.net.ConnectException: failed to connect to /10.10.20.52 (port 9552): connect failed: ECONNREFUSED (Connection refused)
at libcore.io.IoBridge.connect(IoBridge.java: 124)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 183)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 163)
at java.net.Socket.startupSocket(Socket.java: 590)
at java.net.Socket.tryAllAddresses(Socket.java: 128)
at java.net.Socket. < init > (Socket.java: 178)
at java.net.Socket. < init > (Socket.java: 150)
at com.entertailion.java.anymote.connection.ConnectingTask.attemptToPair(ConnectingTask.java: 260)
at com.entertailion.java.anymote.connection.ConnectingTask.connect(ConnectingTask.java: 209)
at com.entertailion.java.anymote.connection.ConnectingTask.run(ConnectingTask.java: 175)
Caused by: android.system.ErrnoException: connect failed: ECONNREFUSED(Connection refused)
at libcore.io.Posix.connect(Native Method)
at libcore.io.BlockGuardOs.connect(BlockGuardOs.java: 111)
at libcore.io.IoBridge.connectErrno(IoBridge.java: 137)
at libcore.io.IoBridge.connect(IoBridge.java: 122)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 183) 
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java: 163) 
at java.net.Socket.startupSocket(Socket.java: 590) 
at java.net.Socket.tryAllAddresses(Socket.java: 128) 
at java.net.Socket. < init > (Socket.java: 178) 
at java.net.Socket. < init > (Socket.java: 150) 
at com.entertailion.java.anymote.connection.ConnectingTask.attemptToPair(ConnectingTask.java: 260) 
at com.entertailion.java.anymote.connection.ConnectingTask.connect(ConnectingTask.java: 209) 
at com.entertailion.java.anymote.connection.ConnectingTask.run(ConnectingTask.java:175)
09-29 14:20:02.821 1464-3440/com.entertailion.android.anymote I/anymote: ConnectingActivity: Pairing failed
09-29 14:20:02.821 1464-3440/com.entertailion.android.anymote E/anymote: ConnectingActivity: run
java.lang.NullPointerException: null socket
at com.entertailion.java.anymote.client.AnymoteSender.attemptToConnect(AnymoteSender.java: 118)
at com.entertailion.java.anymote.connection.ConnectingTask.run(ConnectingTask.java: 177)
As far as the Anymote library implementation is concerned, it has a clear API available for this purpose.
In the Android-Anymote implementation, you can see in MainActivity at Line No: 59 that it registers itself for possible connections. All available devices are provided at Line No: 319 in the onSelectDevice method; this is the list of all visible Google TV devices on the network.

How to create a local variable with ASM?

I'm trying to patch a class with ASM. I need to add some logic in a function. This logic needs a new local variable. Here is what I've done:
class CreateHashTableMethodAdapter extends MethodAdapter {
    @Override
    public void visitMethodInsn(int opcode, String owner, String name, String desc) {
        System.out.println(opcode + "/" + owner + "/" + name + "/" + desc);
        if (opcode == Opcodes.INVOKESPECIAL &&
                "javax/naming/InitialContext".equals(owner) &&
                "<init>".equals(name) &&
                "()V".equals(desc)) {
            System.out.println("In mod");
            // 83: new #436; //class javax/naming/InitialContext
            // 86: dup
            mv.visitMethodInsn(Opcodes.INVOKESPECIAL, "javax/naming/InitialContext", "<init>", "()V");
            mv.visitVarInsn(Opcodes.ASTORE, 1);
            Label start_patch = new Label();
            Label end_patch = new Label();
            mv.visitLabel(start_patch);
            mv.visitTypeInsn(Opcodes.NEW, "java/util/Hashtable");
            mv.visitInsn(Opcodes.DUP);
            mv.visitMethodInsn(Opcodes.INVOKESPECIAL, "java/util/Hashtable", "<init>", "()V");
            mv.visitVarInsn(Opcodes.ASTORE, 9);
            // ........ sNip ..........
            mv.visitLabel(end_patch);
            mv.visitLocalVariable("env", "Ljava/util/Hashtable;", null, start_patch, end_patch, 9);
            // 127: astore_1
        } else {
            mv.visitMethodInsn(opcode, owner, name, desc);
        }
    }
}
When I run this method adapter against CheckClassAdapter it states:
org.objectweb.asm.tree.analysis.AnalyzerException: Error at instruction 51: Trying to access an inexistant local variable 9
.... sNiP ....
00050 R R . . . : R R : INVOKESPECIAL java/util/Hashtable.<init> ()V
00051 R R . . . : R : ASTORE 9
I think I'm misusing visitLocalVariable, but I cannot figure out where I'm supposed to call it.
When I javap the generated bytecode (without checking), I get the following local variable table:
LocalVariableTable:
Start Length Slot Name Signature
91 40 9 env Ljava/util/Hashtable;
0 343 0 this Lpmu/jms/ServerJMS;
132 146 1 initialContext Ljavax/naming/InitialContext;
153 125 2 topicConnectionFactory Ljavax/jms/TopicConnectionFactory;
223 55 3 topic Ljavax/jms/Topic;
249 29 4 topicSubscriber Ljavax/jms/TopicSubscriber;
279 55 1 ex Ljava/lang/Exception;
281 53 2 codeMessage I
289 45 3 params Lpmu/data/Parameters;
325 9 4 messageError Ljava/lang/String;
As you may notice, my variable is there, but listed topmost?!
Any ideas?
One convenient way to create new local variables is to extend LocalVariablesSorter instead of MethodAdapter. Then you can allocate local variables as needed using newLocal() without interfering with existing variables. See section 3.3.3 of "ASM 4.0: A Java bytecode engineering library" on the ASM homepage for details.
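A minimal sketch of what that can look like (ASM 4.x-style constructor; the surrounding patch logic from the question is abbreviated, and the hard-coded slot 9 is replaced by a slot obtained from newLocal()):

import org.objectweb.asm.Label;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;
import org.objectweb.asm.Type;
import org.objectweb.asm.commons.LocalVariablesSorter;

class CreateHashTableLocalSorter extends LocalVariablesSorter {
    CreateHashTableLocalSorter(int access, String desc, MethodVisitor mv) {
        super(Opcodes.ASM4, access, desc, mv);
    }

    @Override
    public void visitMethodInsn(int opcode, String owner, String name, String desc) {
        // Forward the original instruction first.
        mv.visitMethodInsn(opcode, owner, name, desc);
        if (opcode == Opcodes.INVOKESPECIAL
                && "javax/naming/InitialContext".equals(owner)
                && "<init>".equals(name) && "()V".equals(desc)) {
            // Allocate a fresh, correctly remapped local slot for the Hashtable.
            int envLocal = newLocal(Type.getObjectType("java/util/Hashtable"));
            Label start = new Label();
            Label end = new Label();
            mv.visitLabel(start);
            mv.visitTypeInsn(Opcodes.NEW, "java/util/Hashtable");
            mv.visitInsn(Opcodes.DUP);
            mv.visitMethodInsn(Opcodes.INVOKESPECIAL, "java/util/Hashtable", "<init>", "()V");
            mv.visitVarInsn(Opcodes.ASTORE, envLocal);
            // ... rest of the patch ...
            mv.visitLabel(end);
            mv.visitLocalVariable("env", "Ljava/util/Hashtable;", null, start, end, envLocal);
        }
    }
}

Since newLocal() already returns an index in the remapped frame, the instructions that use it are emitted directly on the delegate mv, while the existing variables of the patched method keep working untouched.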
